st178268
Hey @Purvak-L, you can cd into the NCCL submodule in the third_party folder (https://github.com/pytorch/pytorch/tree/master/third_party/nccl) and manually update the NCCL module there. Another option is to install 2.7.6 locally and set USE_SYSTEM_NCCL=1 when building PyTorch. See this issue: https://github.com/pytorch/pytorch/issues/32286
st178269
mrshenli: USE_SYSTEM_NCCL=1 Thank you @mrshenli, this helped! I downloaded the latest NCCL (2.7.6.1) and set the flag. After that I built PyTorch from source using python setup.py install.
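A quick way to confirm that the rebuilt PyTorch actually picked up the system NCCL is to query the NCCL version it was compiled against. This is a minimal sketch, not from the original thread; note that torch.cuda.nccl.version() returns either an integer (e.g. 2706) or a (major, minor, patch) tuple depending on the PyTorch release.

import torch

print(torch.__version__)
# Should reflect the locally installed NCCL, e.g. 2.7.6
print(torch.cuda.nccl.version())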
st178270
I have a Sampler that does backpropagation through time, similar to the one in the torchnlp examples. It takes the training batch size as an argument so that batches are properly continuous (e.g. if the dataset is [abcdefghi] and the batch size is 3, the batches are [adg], [beh], [cfi]). My question is, how does this work in a distributed setting? From reading the code of the distributed sampler, I got the impression that each process gets its own copy of the sampler. In that case, I would need to know the local batch size in every process to properly separate batches. Is there a general rule to determine the batch size per process (such as an equal portion per GPU), and if not, how could one determine the local batch size?
st178271
Is there a general rule to determine batch size per process?
Yes, if possible, it's better that the data is evenly distributed across processes; otherwise the processes with lighter workloads will frequently have to wait for the stragglers, causing unnecessary slowdown. Compared to that, if you are using DistributedDataParallel (DDP), a more important requirement is that all processes run the same number of forward/backward iterations, otherwise the collective communications in DDP's backward pass will hang.
how could one determine the local batch size?
See the discussion here: Should we split batch_size according to ngpu_per_node when DistributedDataparallel
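The "equal portion per process" rule of thumb above can be made concrete with a small sketch. This is not from the original thread; it assumes the process group is already initialized and that the global batch size divides evenly by the world size.

import torch.distributed as dist

global_batch_size = 32
world_size = dist.get_world_size()
rank = dist.get_rank()

# Keep the workload even so no rank waits on stragglers, and make sure
# every rank ends up running the same number of iterations.
assert global_batch_size % world_size == 0
local_batch_size = global_batch_size // world_size
print(f"rank {rank}: local batch size = {local_batch_size}")

Each process would then construct its BPTT-style sampler with local_batch_size over its own shard of the data.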
st178272
I am using DistributedDataParallel, which instantiates multiple processes to train a model on multiple GPUs. I want to save each experiment run's backup to a single new folder (say, by passing the same timestamp to all the processes). However, some processes being delayed by a second leads to different timestamps within the same experiment run. Is it possible to pass the same information (a single timestamp) to all the processes? I start my script with:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 NCCL_LL_THRESHOLD=0 python \
    -i \
    -m torch.distributed.launch \
    --master_port=9997 \
    --nproc_per_node=8 \
    main.py .....

Thanks
st178273
Solved by mrshenli in post #2: Does it work if you let the rank 0 process broadcast its timestamp to the other processes?
st178274
Does it work if you let the rank 0 process broadcast its timestamp to the other processes?
st178275
Thanks a lot, that worked! I have one question though. I broadcast the seconds of the process with local rank 5 as a tensor, and so all the processes have the same seconds after broadcasting. How does it work: is it possible that a faster process uses an old value of the variable that is broadcast later by the src process, or do all the processes wait until the value has been broadcast by the src process?

Current time on machine is : 3 2020-08-02_21:52:52
Current time on machine is : 4 2020-08-02_21:52:52
Current time on machine is : 2 2020-08-02_21:52:52
Current time on machine is : 5 2020-08-02_21:52:52
Current time on machine is : 7 2020-08-02_21:52:52
Current time on machine is : 6 2020-08-02_21:52:53
Current time on machine is : 1 2020-08-02_21:52:53
Current time on machine is : 0 2020-08-02_21:52:53
Before Broadcasting seconds: tensor([52], device='cuda:4')
Before Broadcasting seconds: tensor([52], device='cuda:3')
Before Broadcasting seconds: tensor([52], device='cuda:2')
Before Broadcasting seconds: tensor([52], device='cuda:7')
Before Broadcasting seconds: tensor([52], device='cuda:5')
Before Broadcasting seconds: tensor([53], device='cuda:1')
Before Broadcasting seconds: tensor([53], device='cuda:0')
Before Broadcasting seconds: tensor([53], device='cuda:6')
<broadcast using torch.distributed.broadcast(LongTensor(seconds), src=5)>
After Broadcasting seconds tensor([52], device='cuda:6')
After Broadcasting seconds tensor([52], device='cuda:1')
After Broadcasting seconds tensor([52], device='cuda:7')
After Broadcasting seconds tensor([52], device='cuda:2')
After Broadcasting seconds tensor([52], device='cuda:0')
After Broadcasting seconds tensor([52], device='cuda:5')
After Broadcasting seconds tensor([52], device='cuda:4')
After Broadcasting seconds tensor([52], device='cuda:3')
st178276
In the broadcast API, there is an async_op argument, which defaults to False. If it is False, all processes block until the value has been broadcast. If it is True, broadcast is non-blocking and returns a Future-like object on which you can call wait(); in that case, the tensor is only guaranteed to hold the result after wait() returns.
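To make the blocking vs. non-blocking behaviour concrete, here is a minimal sketch, not from the original thread. It assumes the process group is already initialized with the NCCL backend, one GPU per rank, and that SRC_RANK is whichever rank owns the timestamp.

import time
import torch
import torch.distributed as dist

SRC_RANK = 0
rank = dist.get_rank()
seconds = torch.tensor([int(time.time()) % 60], dtype=torch.long, device=f"cuda:{rank}")

# Blocking form: every rank returns only once `seconds` holds the value from SRC_RANK.
dist.broadcast(seconds, src=SRC_RANK)

# Non-blocking form: `seconds` is only guaranteed to be valid after wait() returns.
work = dist.broadcast(seconds, src=SRC_RANK, async_op=True)
work.wait()
print(f"rank {rank} seconds after broadcast: {seconds.item()}")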
st178277
I need to do something like this:

class MyOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, net1, net2, x):
        ctx.net1 = net1
        ctx.net2 = net2
        ctx.save_for_backward(x)
        return net1(x)

    @staticmethod
    def backward(ctx, grad):
        net1 = ctx.net1
        net2 = ctx.net2
        x, = ctx.saved_tensors
        # disable gradients for the parameters in net2, because I only need the gradient of x through net2
        for params in net2.parameters():
            params.requires_grad_(False)
        with torch.enable_grad():
            y = net2(x)
            y.backward(torch.ones_like(x).to(x))
        gradx = x.grad.clone().detach()
        # re-enable gradients for net2, because it is used in other computations
        for params in net2.parameters():
            params.requires_grad_(True)
        return (None, None, gradx)

This code works well on a single GPU. However, when I use DataParallel with multiple GPUs, the gradient is wrong. I guess it is because there is no lock across the parallel replicas, so some gradients are backpropagated into the parameters of net2. How can I correct my code for DataParallel models?
st178278
Solved by ruotianluo in post #2: My guess for why it doesn't work is that you can no longer get the parameters on a DataParallel replica. One workaround (my guess) is to use torch.autograd.grad instead of backward. You can do: gradx = torch.autograd.grad(y, x, torch.ones_like(x).to(x))[0]
st178279
My guess for why it doesn't work is that you can no longer get the parameters on a DataParallel replica. One workaround (my guess) is to use torch.autograd.grad instead of backward. You can do: gradx = torch.autograd.grad(y, x, torch.ones_like(x).to(x))[0]
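For illustration, here is a minimal sketch, not from the original thread, of how that workaround could slot into the backward from the question. The detached leaf x_ and the torch.ones_like(y) grad_outputs are my adjustments; in the thread y was assumed to have the same shape as x.

import torch

class MyOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, net1, net2, x):
        ctx.net2 = net2
        ctx.save_for_backward(x)
        return net1(x)

    @staticmethod
    def backward(ctx, grad):
        net2 = ctx.net2
        x, = ctx.saved_tensors
        with torch.enable_grad():
            # use a detached leaf so nothing is accumulated into x.grad
            # or into net2's parameter .grad fields
            x_ = x.detach().requires_grad_(True)
            y = net2(x_)
            # gradient w.r.t. x only; net2's parameters are left untouched
            gradx = torch.autograd.grad(y, x_, torch.ones_like(y))[0]
        return (None, None, gradx)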
st178280
Can you set requires_grad to True for net2's parameters before you start the forward and then set it to False after you are done with the forward? This way it stays False for the entire backward pass despite the concurrent execution of the backward pass across multiple GPUs.
st178281
There is a model wrapped by nn.DataParallel:

self.model = Bert(6, 12, 513, 384*4, 64, 64, 2, 384, self.base_task.max_vocab_indexes['input_ids'])
self.model = nn.DataParallel(self.model).cuda()

and inside the model there is a constant tensor, whose name is pos:

class Embedding(nn.Module):
    def __init__(self, maxlen, d_model, n_segments, vocab_size, device='cuda'):
        super(Embedding, self).__init__()
        self.device = device
        self.tok_embed = nn.Embedding(vocab_size, d_model)  # token embedding
        self.pos_embed = nn.Embedding(maxlen, d_model)  # position embedding
        self.seg_embed = nn.Embedding(n_segments, d_model)  # segment(token type) embedding
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x, seg):
        seq_len = x.size(1)
        pos = torch.arange(seq_len, dtype=torch.long, device=self.device)
        pos = pos.unsqueeze(0).expand_as(x)  # (seq_len,) -> (batch_size, seq_len)
        embedding = self.tok_embed(x) + self.pos_embed(pos) + self.seg_embed(seg)
        return self.norm(embedding)

I used this code with just one GPU and it works. But this time I need to use more GPUs, and I have to change that tensor, which was manually placed on device=self.device, into something that is dynamically placed on each GPU by DataParallel. … But this is hard for me, and the code below doesn't work. The tensor just stays on the CPU, even with .cuda(). How can I solve this issue? I have been struggling with it all day…

class Embedding(nn.Module):
    def __init__(self, maxlen, d_model, n_segments, vocab_size, device='cuda'):
        super(Embedding, self).__init__()
        self.device = torch.device('cuda')
        self.tok_embed = nn.Embedding(vocab_size, d_model)  # token embedding
        self.pos_embed = nn.Embedding(maxlen, d_model).cpu()  # position embedding
        self.seg_embed = nn.Embedding(n_segments, d_model)  # segment(token type) embedding
        self.norm = nn.LayerNorm(d_model)
        self.pos = torch.arange(513, dtype=torch.long, requires_grad=False).unsqueeze(0)

    def forward(self, x, seg):
        # seq_len = x.size(1)
        print(f'pos device: {self.pos.device}')  # prints "cpu"
        pos = self.pos.expand_as(x)  # (seq_len,) -> (batch_size, seq_len)
        cuda_pos = self.pos_embed(pos).cuda()
        print(f'pos device: {cuda_pos.device}, x device : {x.device}, seg device: {seg.device}')  # prints "cpu" for pos
        embedding = self.tok_embed(x) + cuda_pos + self.seg_embed(seg)
        return self.norm(embedding)
st178282
I also tried add that tensor to model’s input. But with this case, when I use 2 gpus, the result is just about half of one batch, (64 x ...) tensor returned, just half of one batch (128). 0/5120 [00:00<? ?it/s] Traceback (most recent call last): File "main.py", line 234, in <module> trainer.train() File "main.py", line 116, in train loss_lm = self.criterion(logits_lm.transpose(1, 2), batch.masked_tokens.transpose(0,1)) # for masked LM File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 932, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2317, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2113, in nll_loss .format(input.size(0), target.size(0))) ValueError: Expected input batch_size (64) to match target batch_size (128). and below is training loop. pos = torch.arange(513, dtype=torch.long, requires_grad=False).unsqueeze(0).to(device=torch.device('cuda')) for epoch in range(max_epoch): loss_sum, acc_sum, len_batch_sum = 0., 0., 0. ds_iter.init_epoch() tr_total = math.ceil(total_len / self.batch_size) tq_iter = tqdm(enumerate(ds_iter), total=tr_total, miniters=min_iters, unit_scale=self.batch_size, bar_format='{n_fmt}/{total_fmt} [{elapsed}<{remaining} {rate_fmt}] {desc}') self.model.train() print('epoch starts') for i, batch in tq_iter: self.model.zero_grad() print('batch starts') device = torch.device('cuda') print(device) print(f'batch.input_ids device : {batch.input_ids.device}, batch.segment_ids : {batch.segment_ids.device}, batch.masekd_pos : {batch.masked_pos.device}') print(f'batch.input_ids shape : {batch.input_ids.shape}, batch.segment_ids : {batch.segment_ids.shape}, batch.masekd_pos : {batch.masked_pos.shape}, pos : {pos.shape}') logits_lm, logits_clsf = self.model(batch.input_ids.transpose(0,1).to(device=device), batch.segment_ids.transpose(0,1).to(device=device), batch.masked_pos.transpose(0,1).to(device=device), pos.to(device=device)) print(f'logits_lm, logits_clsf shape : {logits_lm.shape}, {logits_clsf.shape}') and these are logs. epoch starts batch starts cuda batch.input_ids device : cuda:0, batch.segment_ids : cuda:0, batch.masekd_pos : cuda:0 batch.input_ids shape : torch.Size([513, 128]), batch.segment_ids : torch.Size([513, 128]), batch.masekd_pos : torch.Size([5, 128]), pos : torch.Size([1, 513]) pos device: cuda:0 # log from Embedding layer in the model pos device: cuda:0, x device : cuda:0, seg device: cuda:0 # log from Embedding layer in the model logits_lm, logits_clsf shape : torch.Size([64, 5, 6015]), torch.Size([64, 2]) logits_lm device: cuda:0, batch target device: cuda:0 0/5120 [00:00<? 
?it/s] Traceback (most recent call last): File "main.py", line 236, in <module> trainer.train() File "main.py", line 118, in train loss_lm = self.criterion(logits_lm.transpose(1, 2), batch.masked_tokens.transpose(0,1)) # for masked LM File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 932, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2317, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2113, in nll_loss .format(input.size(0), target.size(0))) ValueError: Expected input batch_size (64) to match target batch_size (128).
st178283
Hey @cybaj, the model needs to be moved to GPU before passing it to DataParallel ctor. Have you tried changing the following code: cybaj: self.model = nn.DataParallel(self.model).cuda() to self.model = nn.DataParallel(self.model.to("cuda:0"))
st178284
BTW, here is the DataParallel tutorial: https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html
st178285
Thank you for reply! @mrshenli, I already tried with to("cuda:0") . and as far as I knew cuda() and to(device="cuda") is same. Below is the issues. In my model, I need to use some constant tensor, and I defined it in model’s forward as you can see top post. So I give torch.device('cuda') to model, and in model, locate that tensor to the device. more specified, device = torch.device('cuda') ... self.model = Bert(6, 12, 513, 384*4, 64, 64, 2, 384, self.base_task.max_vocab_indexes['input_ids'], device=device) self.model = nn.DataParallel(self.model) self.model = self.model.cuda() I ran it with 2 gpus, and it shows up right. device_count = torch.cuda.device_count() print(f'gpu count: {torch.cuda.device_count()}') gpu count: 2 dataset iterator is device=torch.device('cuda') DataParallel wrapped model with cuda() and model get returned, logits_clsf and logits_lm loss was calculated. and loss.backward() phase, I think stuck. print after loss.backward() wasn’t printed. belows are logs and the model. batch starts logits_lm device: cuda:0, batch target device: cuda:0 loss_lm calculated loss_clsf calculated print('epoch starts') for i, batch in tq_iter: self.model.zero_grad() print('batch starts') logits_lm, logits_clsf = self.model(batch.input_ids.transpose(0,1), batch.segment_ids.transpose(0,1), batch.masked_pos.transpose(0,1)) print(f'logits_lm device: {logits_lm.device}, batch target device: {batch.masked_tokens.device}') loss_lm = self.criterion(logits_lm.transpose(1, 2), batch.masked_tokens.transpose(0,1)) # for masked LM print('loss_lm calculated') loss_lm = (loss_lm.float()).mean() loss_clsf = self.criterion(logits_clsf, batch.is_next) # for sentence classification print('loss_clsf calculated') loss = loss_lm + loss_clsf loss.backward() print('loss backwarded') without any error message or logs, the log loss backwarded didn’t shows up. I think it’s stuck. I don’t know what I have to do. All the post in this thread, are my tries after this happened… I assumed that, in backward phase, ‘replica 0’, which is dedicated to cuda:0 is worked well(backward completed well), and waiting for ‘replica 1’, which is dedicated to cuda:1 to be completed with it’s backward, but in this case, some error occurred to him and failed to backward, so ‘replica 0’ keeps waiting and thus 'loss backwarded' log didn’t show up. Further more assuming that, even embedding tensor in each replica’s model, looks like splitted well and dedicated well like this, class Embedding(nn.Module): def __init__ ... 
def forward(self, x, seg): seq_len = x.size(1) pos = torch.arange(seq_len, dtype=torch.long, device=self.device) pos = pos.unsqueeze(0).expand_as(x) # (seq_len,) -> (batch_size, seq_len) pos.requires_grad = False print(f'pos tensor device: {pos.device}, shape: {pos.shape}') embedding = self.tok_embed(x) + self.pos_embed(pos) + self.seg_embed(seg) return self.norm(embedding) pos tensor device: cuda:1, shape: torch.Size([64, 513]) pos tensor device: cuda:0, shape: torch.Size([64, 513]) embedding tensor device: cuda:1, shape: torch.Size([64, 513, 384]) embedding tensor device: cuda:0, shape: torch.Size([64, 513, 384]) and about other cuda() defined tensor in the model, too context tensor device: cuda:1, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:1, shape: torch.Size([64, 513, 384]) context tensor device: cuda:1, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:1, shape: torch.Size([64, 513, 384]) context tensor device: cuda:1, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:1, shape: torch.Size([64, 513, 384]) context tensor device: cuda:0, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:0, shape: torch.Size([64, 513, 384]) context tensor device: cuda:1, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:1, shape: torch.Size([64, 513, 384]) context tensor device: cuda:0, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:0, shape: torch.Size([64, 513, 384]) context tensor device: cuda:1, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:1, shape: torch.Size([64, 513, 384]) context tensor device: cuda:0, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:0, shape: torch.Size([64, 513, 384]) context tensor device: cuda:1, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:1, shape: torch.Size([64, 513, 384]) context tensor device: cuda:0, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:0, shape: torch.Size([64, 513, 384]) context tensor device: cuda:0, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:0, shape: torch.Size([64, 513, 384]) context tensor device: cuda:0, shape: torch.Size([64, 513, 768]) multihead output tensor device: cuda:0, shape: torch.Size([64, 513, 384]) and the last logit gathered well, too, logtis_lm shape : torch.Size([128, 5, 6015]) There are some error could happened at backward phase, but I cannot even imagine about it… Or maybe the sum about loss, loss = loss_lm + loss_clsf or loss_lm.float().mean() after get gathered outputs, affects to backward phase, I think but … I don’t know what should I do. loss_lm = self.criterion(logits_lm.transpose(1, 2), batch.masked_tokens.transpose(0,1)) # for masked LM print('loss_lm calculated') print(f'loss_lm tensor device: {loss_lm.device}, shape: {loss_lm.shape}') loss_lm = (loss_lm.float()).mean() print(f'loss_lm tensor device: {loss_lm.device}, shape: {loss_lm.shape}') loss_clsf = self.criterion(logits_clsf, batch.is_next) # for sentence classification print('loss_clsf calculated') print(f'loss_clsf tensor device: {loss_clsf.device}, shape: {loss_clsf.shape}') loss = loss_lm + loss_clsf print(f'sumloss tensor device: {loss.device}, shape: {loss.shape}') loss.backward() self.optimizer.step() print('stepped')
st178286
cybaj:

def forward(self, x, seg):
    seq_len = x.size(1)
    pos = torch.arange(seq_len, dtype=torch.long, device=self.device)
    pos = pos.unsqueeze(0).expand_as(x)  # (seq_len,) -> (batch_size, seq_len)
    pos.requires_grad = False
    print(f'pos tensor device: {pos.device}, shape: {pos.shape}')
    embedding = self.tok_embed(x) + self.pos_embed(pos) + self.seg_embed(seg)
    return self.norm(embedding)

pos tensor device: cuda:1, shape: torch.Size([64, 513])
pos tensor device: cuda:0, shape: torch.Size([64, 513])

Hey @cybaj, I am a little confused about the above code. IIUC, the self.device attribute on both replicas points to cuda:0, as replicate.py is not smart enough to change that for you. In that case, how did pos successfully end up on different devices? It looks like expand_as does not automatically change the device either:

>>> import torch
>>> x = torch.arange(10, device="cuda:0")
>>> y = torch.ones(10, 10).to(1)
>>> z = x.expand_as(y)
>>> z.device
device(type='cuda', index=0)
>>> y.device
device(type='cuda', index=1)
>>> x.device
device(type='cuda', index=0)

If you would like to get the correct device, can you read it from x.device? (The input to forward should be scattered properly.)
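As a concrete illustration of that last suggestion, here is a minimal sketch, not from the original thread, of an embedding-style module that derives the device from its input rather than from a stored self.device, so each DataParallel replica builds its position indices on its own GPU. The sizes are placeholders.

import torch
import torch.nn as nn

class PositionAwareEmbedding(nn.Module):
    def __init__(self, maxlen=513, d_model=384, vocab_size=6015):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(maxlen, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        # x: (batch_size, seq_len) token ids, already scattered to this replica's GPU
        seq_len = x.size(1)
        pos = torch.arange(seq_len, dtype=torch.long, device=x.device)  # follow the input's device
        pos = pos.unsqueeze(0).expand_as(x)
        return self.norm(self.tok_embed(x) + self.pos_embed(pos))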
st178287
Thank you, @mrshenli. I changed all the self.device usages to something like x.device in forward. … It looks like it works, but it still gets stuck after loss.backward(). All the losses are calculated, and the loss tensor is on cuda:0, which is the default output device.

loss_lm calculated
loss_lm : 72.27577209472656
loss_lm tensor device: cuda:0, shape: torch.Size([])
after mean loss_lm : 72.27577209472656
after mean loss_lm tensor device: cuda:0, shape: torch.Size([])
loss_clsf calculated
loss_clsf : 0.7298979759216309
loss_clsf tensor device: cuda:0, shape: torch.Size([])
sumloss tensor device: cuda:0, shape: torch.Size([])

Why does it get stuck when the loss tensor on cuda:0 starts to backward? …
st178288
Hey @cybaj, could you please share a self-contained minimal repro program? It is hard to tell from the printed outputs alone.
st178289
It is totally my fault, thank you @mrshenli. I learned from your advice how to create and use tensors in a module's forward under DataParallel. os.environ['CUDA_LAUNCH_BLOCKING'] = '1' was the culprit… For anyone who uses this CUDA option to check logs: it stalls the DataParallel process, so it is recommended to comment it out.
st178290
I am experimenting with gradient compression techniques to reduce communication during distributed training. However, I found that DDP by default averages the replica gradients with all-reduce. Is there some way to turn this off, since I will be aggregating the gradients in an encoded format?
st178291
On the master branch, there is a prototype DDP communication hook feature, which is built for this purpose: https://github.com/pytorch/pytorch/issues/39272 In prior releases (<= v1.6), there is no way to turn gradient averaging off without modifying the C++ code and recompiling. Update: synced with Sinan (the author of this comm hook feature); this will be reverted due to a perf regression. We are investigating.
st178292
Examples can be found here: https://github.com/pytorch/pytorch/blob/c76fada4a859742ac679013b7428017a782e1432/torch/nn/parallel/distributed.py#L607-L684 IIUC, as of today, the communication bucket is still divided by the world size even if the hook is enabled. We are working on removing that division.
github.com pytorch/pytorch/blob/c76fada4a859742ac679013b7428017a782e1432/torch/csrc/distributed/c10d/reducer.cpp#L355-L356

auto wrapped = c10::scalar_to_tensor(double(1.) / process_group_->getSize());
st178293
Thanks Shen Li. I had in fact already seen the DDP communication hook PR and had interacted with Sinan as well. I was actually looking for something more flexible that would allow me to measure the time and bits during communication. I will definitely check the comm hook once it is ready.
st178294
I was actually looking for something more flexible which would allow me to measure the time and bits during communication. It should be possible to do this in the current proposal of the communication hook. Could you elaborate a bit more on the limitations in the current proposal that might prevent us from doing these measurements?
st178295
As of now, I plan to measure the time taken for gradient accumulation and the number of bits communicated in each iteration. I might even need to find the bits communicated per layer in the future to explore layer-wise compression.
st178296
As of now, I plan to measure the time taken for gradient accumulation
Are you referring to the AccumulateGrad function? If so, the autograd profiler would display the time taken for this function.
and the number of bits communicated (for each iteration)
This should be possible in the current proposal of the communication hook, since you can add up the bits for all the buckets.
I might need to even find the bits communicated for each layer in the future to explore layer wise compression
This is probably something that you cannot do with the existing hook, since it provides the entire bucket to the user, and currently there is no way to split out individual parameters from the bucket.
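For reference, a hedged sketch of counting the bits per iteration from inside a communication hook. The hook API was still a prototype when this thread was written and has changed between releases, so the names used here (register_comm_hook, bucket.buffer(), the future-based all-reduce) follow the later public API and should be treated as assumptions, not as the exact API discussed above.

import torch
import torch.distributed as dist

bits_this_iteration = 0

def counting_allreduce_hook(state, bucket):
    global bits_this_iteration
    tensor = bucket.buffer()  # flattened gradients for this bucket
    bits_this_iteration += tensor.numel() * tensor.element_size() * 8

    # plain all-reduce followed by averaging, mimicking default DDP behaviour
    world_size = dist.get_world_size()
    fut = dist.all_reduce(tensor, op=dist.ReduceOp.SUM, async_op=True).get_future()
    return fut.then(lambda f: f.value()[0] / world_size)

# ddp_model.register_comm_hook(state=None, hook=counting_allreduce_hook)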
st178297
Hi everyone, when I train the model with 4 GPUs using PyTorch distributed, I see each error message 4 times if there is any. How can I make it appear only once? Thanks
st178298
Solved by mrshenli in post #3: If you want to suppress both stdout and stderr outputs, you can try this:

import sys, os

f = open(os.devnull, "w")
if rank != 0:
    sys.stdout = f
    sys.stderr = f
st178299
If you start the training using a function that gets a rank argument passed in, I used this:

if rank == 0:
    print("error message")
st178300
If you want to suppress both stdout and stderr outputs, you can try this:

import sys, os

f = open(os.devnull, "w")
if rank != 0:
    sys.stdout = f
    sys.stderr = f
st178301
Hello! I am trying to set up a training script using DistributedDataParallel (DDP) where the model changes between training and evaluation modes. However, when I try to switch into evaluation mode with model=model.eval() model becomes a NoneType. I also tried to use model=model.train(False) but the result was the same. My issue is reproduceable with modifying the DDP example 1, thus: import os import torch import torch.distributed as dist import torch.nn as nn import torch.optim as optim import torch.multiprocessing as mp from torch.nn.parallel import DistributedDataParallel as DDP def setup(rank, world_size): os.environ['MASTER_ADDR'] = 'localhost' os.environ['MASTER_PORT'] = '12355' # initialize the process group dist.init_process_group("gloo", rank=rank, world_size=world_size) def cleanup(): dist.destroy_process_group() class ToyModel(nn.Module): def __init__(self): super(ToyModel, self).__init__() self.net1 = nn.Linear(10, 10) self.drop1 = nn.Dropout(p=0.6) self.relu = nn.ReLU() self.net2 = nn.Linear(10, 5) def forward(self, x): return self.net2(self.relu(self.drop1(self.net1(x)))) def demo_basic(rank, world_size): print(f"Running basic DDP example on rank {rank}.") setup(rank, world_size) # create model and move it to GPU with id rank model = ToyModel().to(rank) ddp_model = DDP(model, device_ids=[rank]) loss_fn = nn.MSELoss() optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) # Training mode print("Training") optimizer.zero_grad() outputs = ddp_model(torch.randn(20, 10)) labels = torch.randn(20, 5).to(rank) loss_fn(outputs, labels).backward() optimizer.step() # Evaluation mode print("Evaluating") ddp_model = ddp_model.eval() outputs = ddp_model(torch.randn(20, 10)) cleanup() def run_demo(demo_fn, world_size): mp.spawn(demo_fn, args=(world_size,), nprocs=world_size, join=True) if __name__ == "__main__": run_demo(demo_basic, 1) What is the proper way of switching between modes DDP? (Or it is not intended to be switched?) Thank you in advance!
st178302
Solved by mrshenli in post #5: Looks like we are still missing that return, at least on master. I am not sure whether some earlier change was applied but got reverted. Adding it in https://github.com/pytorch/pytorch/pull/42131 I was just starting out with DistributedDataParallel and was not sure whether it's possible to switch m…
st178303
Hello, this issue hasn’t been fixed in 1.5.0, but has been fixed in 1.5.1: v1.5.0: def train(self, mode=True): super(DistributedDataParallel, self).train(mode) for module in self._module_copies[1:]: module.train(mode) is not returning self v1.5.1 def train(self, mode=True): self.training = mode for module in self.children(): module.train(mode) return self is returning self.
st178304
Thank you for your answer. Strangely enough, I am using version 1.5.1 and the line returning self is present in the train() function. I even tried reinstalling 1.5.1 after cleaning the conda cache. Then I created a new conda environment and installed PyTorch with Python 3.8 (I was originally using 3.7). However, the problem was still there. The only thing I did not try was the nightly builds, as I could not download them within 7 minutes and lost patience. However, if the intended way of switching is no different from the non-DistributedDataParallel case, then I am glad. I was just starting out with DistributedDataParallel and was not sure whether it is possible to switch modes, or whether one has to define the mode before using the wrapper, or some other magic.
st178305
Looks like we are still missing that return, at least on master. I am not sure whether some earlier change was applied but got reverted. Adding it in https://github.com/pytorch/pytorch/pull/42131 I was just starting out with DistributedDataParallel and was not sure whether it's possible to switch modes, or one has to define the mode before using the wrapper or some other magic. DDP's train() and eval() should work as expected. Just please remember to wrap the evaluation with torch.no_grad() when running in eval mode.
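To make the suggested pattern explicit, here is a minimal sketch, not from the original thread, of switching a DDP-wrapped model between training and evaluation. It assumes ddp_model, loss_fn, optimizer and a rank-local device already exist, as in the DDP example the question was based on.

import torch

# training step
ddp_model.train()
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
loss_fn(outputs, torch.randn(20, 5).to(rank)).backward()
optimizer.step()

# evaluation step: call eval() (no need to reassign the model) and disable autograd
ddp_model.eval()
with torch.no_grad():
    eval_outputs = ddp_model(torch.randn(20, 10))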
st178306
I just tried it out with 1.6.0, but it seems your commit did not make it in (or the issue is elsewhere). On the other hand, thank you very much for mentioning torch.no_grad()! It was a feature I was not yet aware of, and it helped me out tremendously.
st178307
Lastly, thank you @iffiX and @mrshenli for taking your time to answer. Both of you were a big help!
st178308
Dudly01: I just tried it out with 1.6.0 but it seems your commit did not make it into it (or the issue is elsewhere ) It will be included in v1.7. The branch cut date for v1.6 was a few weeks ago.
st178309
In the meantime I also realized that, as my only intention is to switch my model to evaluation mode, I can accomplish it with model.eval() alone; there is no real need for model = model.eval(). I leave this here for future reference to aid people like me.
st178310
Hi, to speed up my training I was looking into pytorches DistributedDataParallel, since the docs state that DataParallel has a lot of overhead which reduces the speed. I tested a couple of hyperparameters and found weird behavior, which left me wondering if I oversaw something. I am running on a linux-64 bit cluster node with 64 cores, 350+ GB of ram and 4 Nvidia Tesla V100 (16 GB). I tested the stable 1.5.0 version and nightly 1.7.0.dev20200720 because I wanted to use automated mixed precision as another speed up. The model I was testing which is a BERT model from the transformer library, with a single linear layer and a BCEWithLogitsLoss. I tested three different training modes (all single machine): 1. a single GPU. 2. multi GPU with DataParallel. 3. multi GPU with DistributedDataParallel. Then I tested memory_pin, num_workers for the dataloader and mixed precision if possible. code for reference: import os from datetime import datetime from argparse import ArgumentParser import torch import torch.multiprocessing as mp import torch.distributed as dist from transformers import AdamW, BertConfig from prediction_module import path_vocab, path_raw_uniprot from prediction_module.protein_datasets import ProteinBertLabeledDataset from prediction_module.helpers import get_logger logger = get_logger(__file__) def train_model_dp(dataset, batch_size=4, n_steps=1000, num_workers=0, parallel=True, mixed_pres=False, pin_memory=False): from prediction_module.protein_models import ProteinBertForMultiLabel if mixed_pres: ProteinBertForMultiLabel.forward = torch.cuda.amp.autocast()(ProteinBertForMultiLabel.forward) torch.manual_seed(0) config = BertConfig( vocab_size=dataset.tokenizer.vocab_size, num_labels=dataset.num_labels, max_position_embeddings=dataset.input_size ) model = ProteinBertForMultiLabel(config) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") if torch.cuda.device_count() > 1 and parallel: batch_size = batch_size * torch.cuda.device_count() model = torch.nn.DataParallel(model) logger.debug(f"testing: {batch_size=} {num_workers=} {parallel=} {mixed_pres=} {pin_memory=}") dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, collate_fn=dataset.collate_fn, shuffle=False, num_workers=num_workers, pin_memory=pin_memory) model.to(device) model.train() optimizer = AdamW(model.parameters(), lr=1e-5) # create optimizer if mixed_pres: scaler = torch.cuda.amp.GradScaler() start = datetime.now() for epoch in range(1): # loop over the dataset multiple times for i, inputs in enumerate(dataloader): for k, v in inputs.items(): if isinstance(v, torch.Tensor): inputs[k] = v.to(device, non_blocking=True) # zero the parameter gradients optimizer.zero_grad() if mixed_pres: with torch.cuda.amp.autocast(): outputs = model(**inputs) loss = outputs[0] loss = loss.mean() # Backward and optimize scaler.scale(loss).backward() scaler.step(optimizer) scaler.update() else: # forward + backward + optimize outputs = model(**inputs) loss = outputs[0] loss = loss.mean() loss.backward() optimizer.step() if i >= n_steps: break logger.debug("Training complete in: %s. 
normalized by batch size: %s", str(datetime.now() - start), str((datetime.now() - start) / batch_size)) def train_start(rank, world_size, batch_size=4, mixed_pres=False, pin_memory=True, num_workers=0, n_steps=1000, epochs=1): from prediction_module.protein_models import ProteinBertForMultiLabel if mixed_pres: ProteinBertForMultiLabel.forward = torch.cuda.amp.autocast()(ProteinBertForMultiLabel.forward) os.environ['MASTER_ADDR'] = 'localhost' os.environ['MASTER_PORT'] = '12355' # initialize the process group dist.init_process_group("nccl", rank=rank, world_size=world_size) torch.manual_seed(0) torch.cuda.set_device(rank) dataset = ProteinBertLabeledDataset( vocab=path_vocab, csv_path=os.path.join(path_raw_uniprot, "raw_data.csv"), h5_path=os.path.join(path_raw_uniprot, "metled_go_data.h5") ) config = BertConfig( vocab_size=dataset.tokenizer.vocab_size, num_labels=dataset.num_labels, max_position_embeddings=dataset.input_size ) model = ProteinBertForMultiLabel(config) model.cuda(rank) model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank], output_device=rank, find_unused_parameters=True) model.train() optimizer = AdamW(model.parameters(), lr=1e-5) # create optimizer if mixed_pres: scaler = torch.cuda.amp.GradScaler() # Data loading code train_sampler = torch.utils.data.distributed.DistributedSampler(dataset, num_replicas=world_size, rank=rank) train_loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers, pin_memory=pin_memory, sampler=train_sampler, collate_fn=dataset.collate_fn) start = datetime.now() for epoch in range(epochs): for i, inputs in enumerate(train_loader): for k, v in inputs.items(): if isinstance(v, torch.Tensor): inputs[k] = v.cuda(rank, non_blocking=True) optimizer.zero_grad() if mixed_pres: with torch.cuda.amp.autocast(): outputs = model(**inputs) loss = outputs[0] loss = loss.mean() # Backward and optimize scaler.scale(loss).backward() scaler.step(optimizer) scaler.update() else: outputs = model(**inputs) loss = outputs[0] loss = loss.mean() loss.backward() optimizer.step() if i >= n_steps: break if rank == 0: logger.debug("Training complete in: %s", str(datetime.now() - start)) dist.destroy_process_group() def train_model_ddp(world_size=4, mixed_pres=False, batch_size=4, pin_memory=False, num_workers=0, n_steps=1000): logger.debug(f"testing: {batch_size=} {num_workers=} {mixed_pres=} {pin_memory=}") mp.spawn(train_start, args=(world_size, batch_size, mixed_pres, pin_memory, num_workers, n_steps), nprocs=world_size, join=True) if __name__ == "__main__": try: from torch.cuda.amp import autocast mp_avail = True except ImportError: mp_avail = False parser = ArgumentParser() parser.add_argument("--test-dp", dest="test_dp", default=False, const=True, nargs="?") parser.add_argument("--test-ddp", dest="test_ddp", default=False, const=True, nargs="?") args = parser.parse_args() args_dict = vars(args) logger.debug("torch version: %s", torch.__version__) if args_dict["test_dp"]: dataset = ProteinBertLabeledDataset( vocab=path_vocab, csv_path=os.path.join(path_raw_uniprot, "raw_data.csv"), h5_path=os.path.join(path_raw_uniprot, "metled_go_data.h5") ) logger.debug("testing single gpu") train_model_dp(dataset, parallel=False) train_model_dp(dataset, parallel=False) if mp_avail: train_model_dp(dataset, parallel=False, mixed_pres=True) train_model_dp(dataset, parallel=False, num_workers=8) train_model_dp(dataset, parallel=False, num_workers=16) train_model_dp(dataset, parallel=False, pin_memory=True) 
logger.debug("testing dp") train_model_dp(dataset) train_model_dp(dataset, num_workers=8) train_model_dp(dataset, num_workers=16) train_model_dp(dataset, pin_memory=True) if mp_avail: train_model_dp(dataset, mixed_pres=True) if args_dict["test_ddp"]: logger.debug("testing ddp") train_model_ddp() train_model_ddp(pin_memory=True) train_model_ddp(num_workers=8) train_model_ddp(num_workers=16) if mp_avail: train_model_ddp(mixed_pres=True) The results: testing single gpu torch version: 1.5.0 testing: batch_size=4 num_workers=0 parallel=False mixed_pres=False pin_memory=False Training complete in: 0:02:48.407579. normalized by batch size: 0:00:42.101900 testing: batch_size=4 num_workers=0 parallel=False mixed_pres=False pin_memory=False Training complete in: 0:02:47.146963. normalized by batch size: 0:00:41.786745 testing: batch_size=4 num_workers=8 parallel=False mixed_pres=False pin_memory=False Training complete in: 0:02:49.422436. normalized by batch size: 0:00:42.355613 testing: batch_size=4 num_workers=16 parallel=False mixed_pres=False pin_memory=False Training complete in: 0:02:50.284026. normalized by batch size: 0:00:42.571010 testing: batch_size=4 num_workers=0 parallel=False mixed_pres=False pin_memory=True Training complete in: 0:02:47.878925. normalized by batch size: 0:00:41.969736 testing dp testing: batch_size=16 num_workers=0 parallel=True mixed_pres=False pin_memory=False Training complete in: 0:05:32.129513. normalized by batch size: 0:00:20.758095 testing: batch_size=16 num_workers=8 parallel=True mixed_pres=False pin_memory=False Training complete in: 0:05:28.702392. normalized by batch size: 0:00:20.543900 testing: batch_size=16 num_workers=16 parallel=True mixed_pres=False pin_memory=False Training complete in: 0:05:29.794879. normalized by batch size: 0:00:20.612181 testing: batch_size=16 num_workers=0 parallel=True mixed_pres=False pin_memory=True Training complete in: 0:05:24.955569. normalized by batch size: 0:00:20.309724 torch version: 1.7.0.dev20200720 testing single gpu testing: batch_size=4 num_workers=0 parallel=False mixed_pres=False pin_memory=False Training complete in: 0:02:50.061025. normalized by batch size: 0:00:42.515261 testing: batch_size=4 num_workers=0 parallel=False mixed_pres=False pin_memory=False Training complete in: 0:02:48.032688. normalized by batch size: 0:00:42.008176 testing: batch_size=4 num_workers=0 parallel=False mixed_pres=True pin_memory=False Training complete in: 0:01:54.984463. normalized by batch size: 0:00:28.746120 testing: batch_size=4 num_workers=8 parallel=False mixed_pres=False pin_memory=False Training complete in: 0:02:50.344483. normalized by batch size: 0:00:42.586124 testing: batch_size=4 num_workers=16 parallel=False mixed_pres=False pin_memory=False Training complete in: 0:02:51.148356. normalized by batch size: 0:00:42.787092 testing: batch_size=4 num_workers=0 parallel=False mixed_pres=False pin_memory=True Training complete in: 0:02:48.677086. normalized by batch size: 0:00:42.169276 testing dp testing: batch_size=16 num_workers=0 parallel=True mixed_pres=False pin_memory=False Training complete in: 0:05:30.977989. normalized by batch size: 0:00:20.686125 testing: batch_size=16 num_workers=8 parallel=True mixed_pres=False pin_memory=False Training complete in: 0:05:26.893676. normalized by batch size: 0:00:20.430856 testing: batch_size=16 num_workers=16 parallel=True mixed_pres=False pin_memory=False Training complete in: 0:05:28.139827. 
normalized by batch size: 0:00:20.508740 testing: batch_size=16 num_workers=0 parallel=True mixed_pres=False pin_memory=True Training complete in: 0:05:22.767213. normalized by batch size: 0:00:20.172952 testing: batch_size=16 num_workers=0 parallel=True mixed_pres=True pin_memory=False Training complete in: 0:04:26.452442. normalized by batch size: 0:00:16.653278 torch version: 1.5.0 testing ddp testing: batch_size=4 num_workers=0 mixed_pres=False pin_memory=False Training complete in: 0:04:59.752312 testing: batch_size=4 num_workers=0 mixed_pres=False pin_memory=True Training complete in: 0:04:59.236787 testing: batch_size=4 num_workers=8 mixed_pres=False pin_memory=False Training complete in: 0:12:16.935697 torch version: 1.7.0.dev20200720 testing ddp testing: batch_size=4 num_workers=0 mixed_pres=False pin_memory=False Training complete in: 0:05:02.979028 testing: batch_size=4 num_workers=0 mixed_pres=False pin_memory=True Training complete in: 0:05:03.088308 testing: batch_size=4 num_workers=8 mixed_pres=False pin_memory=False Training complete in: 0:11:05.255453 testing: batch_size=4 num_workers=0 mixed_pres=True pin_memory=False Training complete in: 0:05:10.881854 My interpretation Training on a single GPU takes about 2:50 minutes for all parameters except mixed precision, which increases speed to around 2 minutes. So perfect parallelization would mean that the same time would be required with 4 GPUs if every single GPU gets a mini-batch with size 4, correct? DataParallel seems to behave very similar to the hyperparameters, training takes around 5:25 mintues, except for mixed precision, which decreases it to 4:25 minutes. Now to DistributedDataParallel: Increasing the number of workers seems to slow down training by a lot. Mixed precision has no effect on training speed (even though I observed on the GPUs that the required ram was decreased compared to not using it, and similar to the ram required for the mixed precision during DataParallel). This is the first time using pytorch, so if I oversaw anything please let me know. Otherwise I would be interested what caused these effects.
st178311
Hello, let's get to your points one by one.

DDP and DP are slow

My interpretation: Training on a single GPU takes about 2:50 minutes for all parameters except mixed precision, which increases speed to around 2 minutes. So perfect parallelization would mean that the same time would be required with 4 GPUs if every single GPU gets a mini-batch of size 4, correct? DataParallel behaves very similarly across the hyperparameters: training takes around 5:25 minutes, except for mixed precision, which decreases it to 4:25 minutes. Now to DistributedDataParallel: increasing the number of workers slows training down by a lot, and mixed precision has no effect on training speed (even though the required GPU RAM decreased compared to not using it, similar to the RAM required for mixed precision with DataParallel).

Your interpretation is correct. However, in Python, whether you use thread-based or process-based parallelism, you face very high costs: for DP it is mainly the GIL, for DDP it is mainly inter-process communication. The best way to increase efficiency is batching, because that part happens entirely in the C/C++/CUDA domain. Therefore, to fully show the power of DDP, you must make sure your model is dealing with roughly 100*800*800*3 elements of data (about 100 frames of images); even a forward pass over that much data through a ResNet would probably take less than a second on powerful GPUs (e.g. your V100). I am not sure how big your model in prediction_module.protein_models is; if it is not large enough, then a batch of 4 per process (16/4 = 4) is probably too small, and the overhead of inter-process communication will demonstrate its annoying existence in this condition.

More workers, more slowly: it is true in Python, whether you are using threads or processes, unless there is no communication overhead (for threads it is the GIL; for processes it is repeated serialization and deserialization, inter-process synchronization, etc.).

The DistributedSampler in this case could also be a major overhead contributor, since internally the DataLoader will use _MultiProcessingDataLoaderIter, which uses an inter-process queue to get data from sub-processes. If your samples are small but numerous, it is very likely that repeated serialization and deserialization contributes a lot to the slowness, because CPU tensors have to be moved to shared memory first and only then are their handles serialized. There is no way to avoid this nightmare if you are using the built-in data sampler. It is possible to customize your own data loader, maybe loading all data onto your GPUs at once (I believe your V100s have enough memory to hold all of it, and you have several of them), but that would require a lot of time to debug.

In summary:
Maybe… you should increase your batch size.
Live with this.
Make your own implementation.
st178312
Thanks for the answer! iffiX: DistributedSampler in this case could also be a major overhead contributor, since internally the DataLoader will use _MultiProcessingDataLoaderIter , which uses a inter process queue to get data from sub-processes I was just testing some bottlenecks in my code an it indeed seems like the DistributedSampler is a major culprit, when used with shuffle=True. This might be because the dataset I was using in the tests has a length of 320 million samples. And it might also be the reason why I wasnt using bigger batch sizes because 10 GB of RAM seems to be occupied by the DistributedSampler. So I decided to use the sampler on a single GPU and look at the effects. code: import torch import os from transformers import AdamW, BertConfig, TrainingArguments, Trainer from datetime import datetime from prediction_module import path_vocab, path_storage from prediction_module.protein_datasets import ProteinBertMaskedLMDataset from prediction_module.protein_models import ProteinBertForMaskedLM def train_single(dist_sampler=False, n_steps=100, shuffle=False): rank = 0 dataset = ProteinBertMaskedLMDataset( path_vocab, os.path.join(path_storage, "data", "uniparc", "uniparc_train_sorted.h5"), ) config = BertConfig( vocab_size=dataset.tokenizer.vocab_size, max_position_embeddings=dataset.input_size ) model = ProteinBertForMaskedLM(config) model.cuda(rank) model.train() optimizer = AdamW(model.parameters(), lr=1e-5) # create optimizer # Data loading code sampler = DistributedSampler(dataset, 1, rank, shuffle=shuffle) if dist_sampler else None loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=False, num_workers=0, pin_memory=False, sampler=sampler, collate_fn=dataset.collate_fn) print("start trainig") start = datetime.now() for epoch in range(1): for i, inputs in enumerate(loader): for k, v in inputs.items(): if isinstance(v, torch.Tensor): inputs[k] = v.cuda(rank, non_blocking=True) optimizer.zero_grad() outputs = model(**inputs) loss = outputs[0] loss = loss.mean() loss.backward() optimizer.step() if i >= n_steps: break print("Training complete in:", str(datetime.now() - start)) if __name__ == "__main__": train_single() train_single(True) train_single(True, shuffle=True) The results: start trainig Training complete in: 0:00:07.233262 start trainig Training complete in: 0:00:19.662416 start trainig Training complete in: 0:03:29.437496 Additionally, the RAM on the GPU required for the first two function calls is about 3,5 GB and for the last one about 14 GB. I am not sure what is going on here, but that seems very weird.
st178313
I think the data loader might have cached something, resulting in that 10.5 GB of extra memory. I am not very familiar with its internal design, so this answer might be wrong. Anyway, you have 350+ GB of memory, why worry about that?
st178314
Because the memory is allocated on the GPU, and a random 10.5 GB allocation on the GPU is not that nice. (I edited the previous post to make that clearer.)
st178315
Weird, it seems that multiple processes have occupied your GPU. Could you please post the output of nvidia-smi?
st178316
Here is the output: during the first two runs it looks like this: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-PCIE... Off | 00000000:18:00.0 Off | 0 | | N/A 44C P0 38W / 250W | 2857MiB / 16160MiB | 0% E. Process | +-------------------------------+----------------------+----------------------+ | 1 Tesla V100-PCIE... Off | 00000000:3B:00.0 Off | 0 | | N/A 43C P0 27W / 250W | 12MiB / 16160MiB | 0% E. Process | +-------------------------------+----------------------+----------------------+ | 2 Tesla V100-PCIE... Off | 00000000:86:00.0 Off | 0 | | N/A 41C P0 27W / 250W | 12MiB / 16160MiB | 0% E. Process | +-------------------------------+----------------------+----------------------+ | 3 Tesla V100-PCIE... Off | 00000000:AF:00.0 Off | 0 | | N/A 41C P0 28W / 250W | 12MiB / 16160MiB | 0% E. Process | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 284149 C python 2845MiB | +-----------------------------------------------------------------------------+ During the last run: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-PCIE... Off | 00000000:18:00.0 Off | 0 | | N/A 46C P0 43W / 250W | 14527MiB / 16160MiB | 0% E. Process | +-------------------------------+----------------------+----------------------+ | 1 Tesla V100-PCIE... Off | 00000000:3B:00.0 Off | 0 | | N/A 42C P0 27W / 250W | 12MiB / 16160MiB | 0% E. Process | +-------------------------------+----------------------+----------------------+ | 2 Tesla V100-PCIE... Off | 00000000:86:00.0 Off | 0 | | N/A 41C P0 27W / 250W | 12MiB / 16160MiB | 0% E. Process | +-------------------------------+----------------------+----------------------+ | 3 Tesla V100-PCIE... Off | 00000000:AF:00.0 Off | 0 | | N/A 41C P0 28W / 250W | 12MiB / 16160MiB | 0% E. Process | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 284149 C python 14515MiB | +-----------------------------------------------------------------------------+
st178317
Thanks @iffiX for covering the distributed training questions! @siheming I wonder if those are cached blocks. Can you print a memory summary using torch.cuda.memory_summary?
st178318
mrshenli: Can you print memory summary using torch.cuda.memory_summary?
When should I call it: as soon as the memory is filled, after training, or does it not matter? Also, I wanted to come back to this question, because I have not yet seen a satisfying answer or understood why this might be the case:
siheming: Mixed precision has no effect on training speed (even though I observed on the GPUs that the required RAM was decreased compared to not using it, and similar to the RAM required for mixed precision with DataParallel).
st178319
siheming: when should I call it? as soon as the memory is filled or after training? or does it not matter?
It depends on when you want to inspect the memory usage. It prints the currently cached memory, allocated memory, etc. I would try to print it every few iterations.
Mixed precision has no effect on training speed (even though I observed on the GPUs that the required RAM was decreased compared to not using it, and similar to the RAM required for mixed precision with DataParallel)
Are you using PyTorch v1.6+? I saw that the DDP + AMP example is only available in the v1.6+ docs: https://pytorch.org/docs/1.5.0/notes/amp_examples.html https://pytorch.org/docs/master/notes/amp_examples.html cc the author of torch.cuda.amp @mcarilli
st178320
mrshenli: It depends on when you want to inspect the memory usage. It prints the currently cached memory, allocated memory, etc. I would try to print it every few iterations.
I'll get on that tomorrow and post the results.
mrshenli: Are you using PyTorch v1.6+? I saw that the DDP + AMP example is only available in the v1.6+ docs: https://pytorch.org/docs/1.5.0/notes/amp_examples.html https://pytorch.org/docs/master/notes/amp_examples.html cc the author of torch.cuda.amp @mcarilli
Yes, I tested both 1.5 and 1.7, and saw a speedup when using 1.7 with AMP in both single-GPU mode and DataParallel mode, but not with DistributedDataParallel. Should I @ him as well, or is yours sufficient?
st178321
mrshenli: I would try to print it every few iterations. I adjusted the code like to print before during and after training: print(torch.cuda.memory_summary()) sampler = torch.utils.data.DistributedSampler(dataset, 1, rank, shuffle=shuffle) if dist_sampler else None loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=False, num_workers=0, pin_memory=False, sampler=sampler, collate_fn=dataset.collate_fn) print("start trainig") start = datetime.now() for epoch in range(1): for i, inputs in enumerate(loader): for k, v in inputs.items(): if isinstance(v, torch.Tensor): inputs[k] = v.cuda(rank, non_blocking=True) if i % 10 == 0: print(torch.cuda.memory_summary()) optimizer.zero_grad() outputs = model(**inputs) loss = outputs[0] loss = loss.mean() loss.backward() optimizer.step() if i >= n_steps: break print("Training complete in:", str(datetime.now() - start)) print(torch.cuda.memory_summary()) and then called the function three times again like before: train_single() train_single(dist_sampler=True) train_single(dist_sampler=True, shuffle=True) This is what happens during the third function call (DistributedSampler with shuffle=True). The other outputs did not show any irregularities so i’ll leave them out (and due to the character limit…). First output is before the dataloader is build, second print is after the first Iteration, third after the tenth iteration |===========================================================================| | PyTorch CUDA memory summary, device ID 0 | |---------------------------------------------------------------------------| | CUDA OOMs: 0 | cudaMalloc retries: 0 | |===========================================================================| | Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed | |---------------------------------------------------------------------------| | Allocated memory | 340029 KB | 1692 MB | 350966 MB | 350634 MB | | from large pool | 339456 KB | 1677 MB | 321965 MB | 321633 MB | | from small pool | 573 KB | 133 MB | 29001 MB | 29001 MB | |---------------------------------------------------------------------------| | Active memory | 340029 KB | 1692 MB | 350966 MB | 350634 MB | | from large pool | 339456 KB | 1677 MB | 321965 MB | 321633 MB | | from small pool | 573 KB | 133 MB | 29001 MB | 29001 MB | |---------------------------------------------------------------------------| | GPU reserved memory | 1842 MB | 1842 MB | 1842 MB | 0 B | | from large pool | 1700 MB | 1700 MB | 1700 MB | 0 B | | from small pool | 142 MB | 142 MB | 142 MB | 0 B | |---------------------------------------------------------------------------| | Non-releasable memory | 51138 KB | 215041 KB | 393608 MB | 393558 MB | | from large pool | 49664 KB | 186288 KB | 362929 MB | 362880 MB | | from small pool | 1474 KB | 36244 KB | 30679 MB | 30678 MB | |---------------------------------------------------------------------------| | Allocations | 204 | 1077 | 251092 | 250888 | | from large pool | 75 | 459 | 126113 | 126038 | | from small pool | 129 | 756 | 124979 | 124850 | |---------------------------------------------------------------------------| | Active allocs | 204 | 1077 | 251092 | 250888 | | from large pool | 75 | 459 | 126113 | 126038 | | from small pool | 129 | 756 | 124979 | 124850 | |---------------------------------------------------------------------------| | GPU reserved segments | 156 | 156 | 156 | 0 | | from large pool | 85 | 85 | 85 | 0 | | from small pool | 71 | 71 | 71 | 0 | 
|---------------------------------------------------------------------------| | Non-releasable allocs | 20 | 177 | 133360 | 133340 | | from large pool | 19 | 80 | 87375 | 87356 | | from small pool | 1 | 98 | 45985 | 45984 | |===========================================================================| start trainig |===========================================================================| | PyTorch CUDA memory summary, device ID 0 | |---------------------------------------------------------------------------| | CUDA OOMs: 0 | cudaMalloc retries: 0 | |===========================================================================| | Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed | |---------------------------------------------------------------------------| | Allocated memory | 340079 KB | 1692 MB | 350966 MB | 350634 MB | | from large pool | 339456 KB | 1677 MB | 321965 MB | 321633 MB | | from small pool | 623 KB | 133 MB | 29001 MB | 29001 MB | |---------------------------------------------------------------------------| | Active memory | 340079 KB | 1692 MB | 350966 MB | 350634 MB | | from large pool | 339456 KB | 1677 MB | 321965 MB | 321633 MB | | from small pool | 623 KB | 133 MB | 29001 MB | 29001 MB | |---------------------------------------------------------------------------| | GPU reserved memory | 1842 MB | 1842 MB | 1842 MB | 0 B | | from large pool | 1700 MB | 1700 MB | 1700 MB | 0 B | | from small pool | 142 MB | 142 MB | 142 MB | 0 B | |---------------------------------------------------------------------------| | Non-releasable memory | 51089 KB | 215041 KB | 393608 MB | 393558 MB | | from large pool | 49664 KB | 186288 KB | 362929 MB | 362880 MB | | from small pool | 1425 KB | 36244 KB | 30679 MB | 30678 MB | |---------------------------------------------------------------------------| | Allocations | 207 | 1077 | 251095 | 250888 | | from large pool | 75 | 459 | 126113 | 126038 | | from small pool | 132 | 756 | 124982 | 124850 | |---------------------------------------------------------------------------| | Active allocs | 207 | 1077 | 251095 | 250888 | | from large pool | 75 | 459 | 126113 | 126038 | | from small pool | 132 | 756 | 124982 | 124850 | |---------------------------------------------------------------------------| | GPU reserved segments | 156 | 156 | 156 | 0 | | from large pool | 85 | 85 | 85 | 0 | | from small pool | 71 | 71 | 71 | 0 | |---------------------------------------------------------------------------| | Non-releasable allocs | 20 | 177 | 133360 | 133340 | | from large pool | 19 | 80 | 87375 | 87356 | | from small pool | 1 | 98 | 45985 | 45984 | |===========================================================================| |===========================================================================| | PyTorch CUDA memory summary, device ID 0 | |---------------------------------------------------------------------------| | CUDA OOMs: 0 | cudaMalloc retries: 0 | |===========================================================================| | Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed | |---------------------------------------------------------------------------| | Allocated memory | 1328 MB | 11131 MB | 494742 MB | 493413 MB | | from large pool | 1326 MB | 11128 MB | 465663 MB | 464337 MB | | from small pool | 2 MB | 133 MB | 29078 MB | 29075 MB | |---------------------------------------------------------------------------| | Active memory | 1328 MB | 11131 MB | 494742 MB | 493413 MB | | from large pool | 1326 MB | 11128 MB | 465663 
MB | 464337 MB | | from small pool | 2 MB | 133 MB | 29078 MB | 29075 MB | |---------------------------------------------------------------------------| | GPU reserved memory | 13646 MB | 13646 MB | 13646 MB | 0 B | | from large pool | 13504 MB | 13504 MB | 13504 MB | 0 B | | from small pool | 142 MB | 142 MB | 142 MB | 0 B | |---------------------------------------------------------------------------| | Non-releasable memory | 158950 KB | 3717 MB | 474414 MB | 474259 MB | | from large pool | 157520 KB | 3717 MB | 443648 MB | 443494 MB | | from small pool | 1430 KB | 35 MB | 30766 MB | 30765 MB | |---------------------------------------------------------------------------| | Allocations | 816 | 1077 | 265399 | 264583 | | from large pool | 297 | 496 | 134867 | 134570 | | from small pool | 519 | 756 | 130532 | 130013 | |---------------------------------------------------------------------------| | Active allocs | 816 | 1077 | 265399 | 264583 | | from large pool | 297 | 496 | 134867 | 134570 | | from small pool | 519 | 756 | 130532 | 130013 | |---------------------------------------------------------------------------| | GPU reserved segments | 297 | 297 | 297 | 0 | | from large pool | 226 | 226 | 226 | 0 | | from small pool | 71 | 71 | 71 | 0 | |---------------------------------------------------------------------------| | Non-releasable allocs | 109 | 191 | 140979 | 140870 | | from large pool | 74 | 154 | 92152 | 92078 | | from small pool | 35 | 98 | 48827 | 48792 | |===========================================================================|
st178322
@mrshenli, I think I figured out what is going on. The DataLoader I am using has dynamic clipping and the inputs are sorted by input length. So without shuffling only small inputs are used (in the ballpark of shape (4, 100)), while with shuffling much larger inputs are used (around shape (4, 1000)). I would guess this also explains some of the speed difference in training. However, I am still slightly confused why a SequentialSampler is so much faster than a DistributedSampler with num_replicas=1 and shuffle=False. So I guess only my question about AMP is left.
st178323
If you’re strongly dataloader bound in the DDP case, any Amp speedup may be negligible/not observable. One simple thing you can try is, don’t use the dataloader at all. In each DDP process, create a single dummy batch of data and use that through all the timing iterations. First try it without amp, which gives an idea how strongly dataloader bound you are overall. For example, if switching to dummy data gives a 2X speedup right away, you know the dataloader is a big bottleneck. Then try it with amp, and see if you observe a speedup relative to the above dummy data+no amp case, which gives an idea if Amp is working.
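To make the suggestion concrete, here is a minimal sketch of such a dummy-batch timing loop. The input shape, the stand-in loss, and the model, optimizer, rank, and n_steps names are assumptions for illustration; swap in whatever your real batches and criterion look like.

import time
import torch

def time_dummy_batches(model, optimizer, rank, n_steps=100, use_amp=False):
    # one fixed batch, created once and reused, so the data pipeline is out of the picture
    dummy = torch.randn(4, 512, device=f"cuda:{rank}")  # hypothetical input shape
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_steps):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=use_amp):
            out = model(dummy)
            loss = out.float().pow(2).mean()  # stand-in loss; use your real criterion
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    torch.cuda.synchronize()
    return time.time() - start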
st178324
mcarilli: One simple thing you can try is, don’t use the dataloader at all. In each DDP process, create a single dummy batch of data and use that through all the timing iterations. Thanks for the suggestion, I will try that. Is there some rough number for how fast my dataloader should be, in seconds or relative to the model? Also, thanks to everyone for helping out and explaining!
st178325
mcarilli: First try it without amp, which gives an idea how strongly dataloader bound you are overall. For example, if switching to dummy data gives a 2X speedup right away, you know the dataloader is a big bottleneck. Then try it with amp, and see if you observe a speedup relative to the above dummy data+no amp case, which gives an idea if Amp is working. Here are the results of the test. I tested the native speed of my model, then the speed with dummy data, and then dummy data plus AMP:

testing ddp
testing: model=mlm batch_size=4 num_workers=0 mixed_pres=False pin_memory=False use_dummy_data=False
Training complete in: 0:03:47.287354
model=mlm batch_size=4 num_workers=0 mixed_pres=False pin_memory=False use_dummy_data=True
Training complete in: 0:02:40.539612
testing: model=mlm batch_size=4 num_workers=0 mixed_pres=True pin_memory=False use_dummy_data=True
Training complete in: 0:02:41.635868

So while my dataloader seems to slow down the training quite a bit, I still see no speedup using AMP. I very lazily updated my training loop to:

for epoch in range(epochs):
    if not use_dummy_data:
        for i, inputs in enumerate(train_loader):
            for k, v in inputs.items():
                if isinstance(v, torch.Tensor):
                    inputs[k] = v.cuda(rank, non_blocking=True)
            optimizer.zero_grad()
            if mixed_pres:
                with torch.cuda.amp.autocast():
                    outputs = model(**inputs)
                    loss = outputs[0]
                    loss = loss.mean()
                # Backward and optimize
                scaler.scale(loss).backward()
                scaler.step(optimizer)
                scaler.update()
            else:
                outputs = model(**inputs)
                loss = outputs[0]
                loss = loss.mean()
                loss.backward()
                optimizer.step()
            if i >= n_steps:
                break
    else:
        inputs = next(enumerate(train_loader))[1]
        inputs.to(f"cuda:{rank}")
        for i in range(n_steps):
            optimizer.zero_grad()
            if mixed_pres:
                with torch.cuda.amp.autocast():
                    outputs = model(**inputs)
                    loss = outputs[0]
                    loss = loss.mean()
                # Backward and optimize
                scaler.scale(loss).backward()
                scaler.step(optimizer)
                scaler.update()
            else:
                outputs = model(**inputs)
                loss = outputs[0]
                loss = loss.mean()
                loss.backward()
                optimizer.step()
st178326
Hi, I want to parallelize the inner loop of MAML. Each inner loop of MAML produces an individual loss along with an individual gradient graph, and after the iteration I have to aggregate the losses followed by backpropagation. My naive idea is to replace the loop with a map. To do this, I guess I need to aggregate the losses from multiple threads (e.g. torch.mean(torch.stack(list_of_loss_from_multiple_threads))). Is it possible to aggregate graphs from worker threads and then do the backprop at once? Thanks
st178327
Hi, it’s pretty tough to give any concrete advice without first knowing what exactly you’re doing and what you’ve tried. Would you mind posting a snippet of code that indicates the inner loop that you’d like to parallelize, and any instructions needed to run the code? Thanks!
st178328
I actually want to try this out, but I am afraid that the gains from parallelizing the inner loop might be outweighed by communication overhead. Were you able to see speedups?
st178329
Further, you don't have to aggregate the losses from the different threads; you could compute the gradients within each thread and then aggregate the gradients from the different threads.
st178330
Sorry, I gave up parallelizing MAML. So I don’t have results that might help your concern.
st178331
zzoon91: Is it possible to aggregate graphs from worker threads and then do the backprop at once? Yes, this is possible, and this is how DataParallel is implemented. The parallel_apply() in the code below will launch multiple threads, with each creating its own autograd graph.
github.com pytorch/pytorch/blob/2de549518e7f0ce2820650b401cd21a9901c74a9/torch/nn/parallel/data_parallel.py#L147-L162 2

def forward(self, *inputs, **kwargs):
    if not self.device_ids:
        return self.module(*inputs, **kwargs)

    for t in chain(self.module.parameters(), self.module.buffers()):
        if t.device != self.src_device_obj:
            raise RuntimeError("module must have its parameters and buffers "
                               "on device {} (device_ids[0]) but found one of "
                               "them on device: {}".format(self.src_device_obj, t.device))

    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
    if len(self.device_ids) == 1:
        return self.module(*inputs[0], **kwargs[0])
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
    outputs = self.parallel_apply(replicas, inputs, kwargs)
    return self.gather(outputs, self.output_device)
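As a rough illustration of the same idea outside of DataParallel, the toy sketch below (not taken from the thread) runs two forward passes in separate threads, collects their losses, and calls backward once on the aggregated loss; autograd records each thread's graph and the single backward traverses both.

import threading
import torch

model = torch.nn.Linear(8, 1)
inputs = [torch.randn(4, 8) for _ in range(2)]  # one chunk per thread
losses = [None] * len(inputs)

def work(i):
    # each thread runs its own forward pass; autograd records a separate graph branch
    losses[i] = model(inputs[i]).pow(2).mean()

threads = [threading.Thread(target=work, args=(i,)) for i in range(len(inputs))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# aggregate the per-thread losses and backprop through all branches at once
torch.stack(losses).mean().backward()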
st178332
Hi, I am looking into the Distributed RPC API, in particular this example: https://pytorch.org/tutorials/intermediate/rpc_tutorial.html#distributed-rnn-using-distributed-autograd-and-distributed-optimizer 11. I have a question about scaling the model onto multiple machines. Think of an instance where we have 16 layers (L1–L16) and 4 machines, each with 4 GPU devices. My model will then be spread across all 16 GPUs on the 4 machines such that each layer occupies the memory of one GPU device. Assume a theoretical scenario where memory is enough for the computation. To do such a task, my training script must be written in such a way that if the original model was M, I now have 16 smaller models M1–M16, where each depends on the output of the previous model in the sequence. I am not sure whether this is the best way to do this. If it is wrong, please explain the best practices with the RPC API. Furthermore, M1–M4 make sense and can be handled with to(device), but when I have to send the input from M4 to M5 (M4 is on GPU:3 of machine 1, M5 is on GPU:0 of machine 2), I need to somehow use an RPC call and send that data to machine 2. This is the same for all boundary cases. Data could be sent somehow via a synchronization mechanism. Is this something possible with the existing APIs? I am not quite clear on how DistributedOptimizer and Distributed Autograd could handle this. These are some questions I have about distributed model parallelism. Thank You, Vibhatha
st178333
Hey @Vibhatha_Abeykoon, thanks for the question, this actually relates to several WIP projects that we are working on now. When I need to do such a task, my training script must be written in such a way that if the original model was M, now I have M1 – M16 smaller models which depends upon the output of the previous model in the sequence. In this case, you need 4 instead of 16 smaller models, and within each model you can use Tensor.to(device) to move data across GPUs as you mentioned below. For pipeline parallelism using RPC, this tutorial 10 can serve as a reference (will be released with v1.6). I am not sure whether this is the best way to do this. If this is wrong, please explain the best practices with the RPC API. This is not the most convenient way to support pipeline parallelism. RPC is a lower-level API that offers flexibility but would require additional application code to orchestrate. One of the projects that we are looking into is to provide a higher-level API, e.g., a DistributedPipelineParallel (DPP) (similar to the DistributedDataParallel) which, ideally, can automatically divide the original model and place model shards (maybe) by using additional configuration hints or specific model structure (e.g., nn.Sequential). But this is still in discussion and no committed release date for it yet. Please do comment if you have suggestions or requirements on this feature. I need to some how use an RPC call and send that data to the machine 2. This is the same case for all boundary conditions. If you want the distributed autograd to automatically take care of the backward pass across machines, then yes, you will need to use RPC to send the intermediate output form machine 1 to machine 2. As of v1.6, RPC only accepts CPU tensors, so you will need to first move the tensor from cuda:3 to cpu on machine 1 and then move the received tensor from cpu to cuda:0 on machine 2. We explicitly added this restriction to avoid unintentional device mapping errors through RPC. And we are working on a new device placement API (similar to the map_location in torch.load) to make this easier, where application can define default device mappings between each pair of nodes and directly pass GPU tensors to RPC. We hope we can get this done in v1.7. Data could be sent some how via a synchronization mechanism. What do you mean by “a synchronization mechanism” here? Is this something possible with the existing APIs. I am not quite clear how DistributedOptimizer and Distributed Autograd could handle this. Yep, the tutorial linked above shows an example.
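For the CPU-tensor restriction mentioned above, a minimal sketch of the machine boundary hop might look like the following. The worker names, the shapes, and the run_shard2 helper are made up for illustration, and rpc.init_rpc is assumed to have already been called on both machines.

import torch
import torch.distributed.rpc as rpc

def run_shard2(x_cpu):
    # runs on machine 2 ("worker1"): move the received CPU tensor onto its local GPU
    x = x_cpu.to("cuda:0")
    # ... forward through the model shard that lives on machine 2 ...
    return x.cpu()  # return a CPU tensor as well

# on machine 1 ("worker0"), after computing the shard-1 output on cuda:3
out = torch.randn(4, 16, device="cuda:3")  # placeholder for the shard-1 output
ret = rpc.rpc_sync("worker1", run_shard2, args=(out.cpu(),))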
st178334
@mrshenli This tutorial is wonderful. I was writing one myself and I think I can learn a lot from yours. I had questions when I was attempting this. What I meant by synchronization is that the model shard on machine 1 needs to finish its computation before model shard 2 can start, even in the pipeline case. Please correct me if I am wrong. So model shard 2 on machine 2 must wait until it gets the data from machine 1. I am still going through your tutorial, and it may already answer this. One more thing: how do we deploy this multi-machine model-parallel module? Will there be modifications to torch.distributed.launch? What is the current method to launch this, as we do for DDP? The pipeline-parallel API would be just great. I was attempting to wrap this with Torchgpipe or PipeDream, and I also felt it would be better if it came within the PyTorch APIs. I have a few suggestions: if we can turn the to(device) call into a to(machine:device) kind of API endpoint, it will be much easier to work with, but I am not quite sure how the changes should be reflected internally. The DPP module needs a few features: how to partition the model (a profiler-based auto-partitioner, or a manual one so that the user can say how to partition). Partitioning a model manually and saying .to(device) is not going to work when we have to deal with complex and large models, so if this could be handled internally it would be ideal for all users. From my experience using Torchgpipe 3, rematerialization as a custom option within DPP would be a great addition. There are a couple of reasons for this: some applications need performance from pipelining rather than memory savings, so one should have the ability to turn it off and on depending on the training job. DPP could have a flag rematerialization=False/True to make it disabled or enabled by the user. From the PipeDream 2 work, it was very clear that multi-machine involvement could be very useful for training, and their profiler usage is important for getting a better DPP. Enabling checkpointing internally for DPP would also be easier, as handling this manually could be troublesome. When the shards are distributed across machines, the checkpoint itself needs to be a distributed entity (in my understanding). These could be great additions to DPP if possible. Regarding the tutorial (PP Tutorial 2): the ResnetBase and self._lock are not clear. Does the lock represent a thread lock or something else? Is it possible to access the code for ResnetBase? Thank You, Vibhatha
st178335
Vibhatha_Abeykoon: What I meant from synchronization is as the Model-Shard in Machine 1 needs to complete compute to start the Model-Shard2, even in pipeline case. Please correct me if I am wrong. Yep, this is correct. That tutorial uses RRef.to_here() to block wait for the result. The downside is that this would block one RPC thread until to_here() returns. If this is a concern, the async_execution decorator can help. [tutorial 2] One more thing, is how are we deploying this multi-machine model parallel module? Will there be modifications to torch.distributed.launch? What is the current method to launch this as we do for DDP? We don’t have a helper launching script for RPC yet as of v1.6. The RPC processes will need to be launched manually or programmably in application code. Added https://github.com/pytorch/pytorch/issues/40974 1 to track. I have a few suggestions, if we can make use of to(device) call into to(machine:device) kind of an API endpoint, it will be much easier to work, but I am not quite sure how the changes should reflect internally. Right! This aligns with the remote device feature that we would love to build on top of RPC, but we don’t have bandwidth to cover that yet. It won’t be very hard to convert every operation of a remote device tensor into an RPC invocation, however that will be too slow due to the per-op comm overhead. Ideally, we should have a remote Tensor type that can do op fusing when possible, similar to lazy tensor. But as this would require a lot of effort and we haven’t seen too many requests for this yet, this feature didn’t make into our top priorities for now. We will come back to re-evaluate after the next release. How to partition the model, (a profiler based auto-partitioned or a manual one so that user can say how to partition). Model partition in a manual way and saying .to(device) is not going to work when we have to deal with complex and large models. So if this could be handled internally, it will be ideal for all users. With the Pipedream 1 work, it was very clear that the multi-machine involvement could be very useful for training and their profiler usage is important in getting a better DPP. Thanks a lot for all the suggestions!! Totally agree. Profiling is great and can definitely provide an easier entry point, especially when the application does not need to squeeze out the last bit of performance. For more perf-centric use cases, we might still want to allow users to hand-craft model partitioning and placement, maybe, by accepting some hints/configs. Enabling checkpointing internally for DPP would be easier as handling this manually could be troublesome. When the shards are distributed across machines the checkpoint itself needs to be a distributed entity (in my understanding). Exactly, this is a feature gap in RPC. We might be able to add checkpointing support to the WIP RemoteModel feature and build DPP on top.
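Until a launcher utility exists, starting the RPC workers by hand usually looks roughly like the sketch below; the master address, port, and world size are placeholders.

import os
import torch.multiprocessing as mp
import torch.distributed.rpc as rpc

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"  # placeholder address
    os.environ["MASTER_PORT"] = "29500"      # placeholder port
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    # ... issue rpc_sync / rpc_async / remote calls here ...
    rpc.shutdown()  # blocks until every worker is done

if __name__ == "__main__":
    mp.spawn(run, args=(4,), nprocs=4, join=True)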
st178336
@mrshenli Thank you so much for this detailed explanation. I will try to design on top of what is offered by the PyTorch APIs. The plan you suggested is great and I hope to use these in the near future. Regarding the tutorial (PP Tutorial 1): the ResnetBase and self._lock are not clear. Does the lock represent a thread lock or something else? Is it possible to access the code for ResnetBase? Thank you, Vibhatha.
st178337
The ResnetBase and self._lock are not clear. Does the lock represent a thread lock or something else? Ah, thanks for the catch. Yep, this is a thread lock to prevent races. The full example code is here: https://github.com/pytorch/examples/blob/master/distributed/rpc/pipeline/main.py 5 Let me add the missing ResNetBase to the tutorial.
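For reference, the pattern is simply a threading.Lock held around the shard's forward, along the lines of this sketch (module sizes are arbitrary):

import threading
import torch.nn as nn

class Shard(nn.Module):
    def __init__(self):
        super().__init__()
        self._lock = threading.Lock()  # plain thread lock, as in the tutorial
        self.net = nn.Linear(128, 128)

    def forward(self, x):
        # several RPC threads may call into the same shard concurrently, so serialize access
        with self._lock:
            return self.net(x)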
st178338
@mrshenli Just following up with you on the performance factor. For distributed model parallelism, could MPI collective communication be a better choice than distributed RPC? I mean, these are two different models designed to serve two different purposes, but at the end of the day what we would be doing is sending or receiving data from one point to another. In terms of performance, does PyTorch Distributed RPC outperform MPI collectives (especially ISend/IRecv, Send/Recv)? Is this something the PyTorch community already considered before deciding to go with RPC instead of MPI libraries? I understand that the current distributed optimizer, autograd, and those extensible components have been written to support RPC. But does MPI stand a chance here?
st178339
Hey @Vibhatha_Abeykoon We will announce a new RPC backend in v1.6, which is called TensorPipe 4. This is a P2P comm library and is designed to automatically figure out the best comm media between two RPC workers, e.g., shm, tcp, nvlink, ib, etc. (This is still WIP.) We will gradually make TensorPipe the default RPC backend and retire the ProcessGroup RPC backend due to the perf reasons you noticed. The original reason for adding the ProcessGroup RPC backend was to have a working comm module to unblock other parts of the system, and also to buy us time to design better solutions. Regarding MPI, we probably will not develop an RPC backend on top of MPI, but we do welcome OSS contributions, and it might be possible to add MPI as a channel type in TensorPipe. One downside of using MPI is that there are different implementations and there does not seem to be one implementation that rules all use cases. That’s also one reason why PyTorch does not include MPI as a submodule but requires users to provide an MPI installation and compile from source. Is there any specific reason for requesting an MPI RPC backend?
st178340
Hey @mrshenli. The main reason is that there are tons of scientific applications written on top of MPI, and it would be really hard to port them to a different backend. These applications will do an MPI_Init somewhere at the very beginning of the program. In the early part of the program, there is application-specific scientific data pre-processing, with shallow or complex algorithms applied to pre-process the data. Then comes the DL workload. Such applications are very common in the high-performance computing domain. So supporting an MPI backend could be vital for supporting such applications seamlessly without breaking the data pipeline/training. I understand there are many MPI implementations, but MPI can still be left to the user to install, and the specifications are mostly consistent across MPI implementations. All it needs is a wrapper library to wrap the collective communication calls. PyTorch already has this API in C10D. Please correct me if I am wrong. TensorPipe seems to be a very interesting project that could glue all of this together.
st178341
I see. Technically, it shouldn’t be too hard to let the ProcessGroup RPC backend use MPI, as it only requires its send/recv features. One option could be adding a field to the ProcessGroup RPC backend construction-time options and letting users decide whether they want to use Gloo, NCCL (> 2.7), or MPI. cc @lcw any thoughts on MPI + TensorPipe? Does it make sense to add MPI as a channel type for TensorPipe?
st178342
I don’t understand the argument for using MPI in RPC: what does the fact that other libraries use the MPI API have to do with the RPC library using it under the hood? AFAIK, MPI is not incompatible with Gloo or TensorPipe: the same process can use them both, in different parts of the code. Also, the fact that RPC uses MPI internally does not help with porting MPI code to RPC: it would still have to be rewritten to use the RPC interface. A good reason would be if there was a difference in performance. Have you reason to believe there is? If we were stuck with the ProcessGroup agent only, then I could agree that we should allow it to use MPI instead of Gloo, but as it’s slated to go away in favor of the TensorPipe-based one this change may not end up being so useful. TensorPipe is natively asynchronous, and thus suits really well the use-case of RPC, contrary to Gloo and MPI which are blocking. We have already proven that TensorPipe outperforms Gloo for RPC. It may be different for MPI, as some MPI implementations use specialized backends, but that’s what TensorPipe is also going to do.
st178343
My two cents on the above discussion too: ideally you shouldn’t specialize your code to handle differently the transfers between GPUs on the same node and between different nodes. By doing so you couple your code with your deployment, meaning you need to rewrite some parts to change from 4 GPUs/host to single-GPU hosts and so on. With the TensorPipe agent you will be able to perform RPC calls between GPUs on a node and the data will still be transferred over NVLink just as if you had done t.to(…). So with no performance overhead you get code that is resilient to a topology change.
st178344
@lcw It is not an argument, just asking whether this is possible with the current implementations you have. With MPI asynchronous calls you can still get asynchronous behavior (ISend, IRecv). Correct me if I am wrong. lcw: Also, the fact that RPC uses MPI internally does not help with porting MPI code to RPC: it would still have to be rewritten to use the RPC interface. Yes, we still have to rewrite, but unless MPI backend support is there, communication on RPC channels will have different performance. Have you benchmarked the performance of Gloo and TensorPipe RPC vs. MPI? The reason for asking about MPI compatibility is that a program does not only have a training script: it has data pre-processing functions, training, and post-processing based on the trained model. If a system designed with an MPI backend is used as the main framework, where PyTorch acts as a library within the code, then support for MPI is immensely important. The use cases of PyTorch are getting more and more complex; I think that is why a library like TensorPipe is also coming into play. I just wanted to point out a possible usage of MPI within the distribution. lcw: With the TensorPipe agent you will be able to perform RPC calls between GPUs on a node and the data will still be transferred over NVLink just as if you had done t.to(…). So with no performance overhead you get code that is resilient to a topology change. This is really useful for model parallelism and writing complex networks with feedback loops. Is TensorPipe going to be a standalone library, or is this going to be adopted in torch.distributed.rpc?
st178345
MPI is not appropriate for serving as a new backend for RPC. Their models are totally different and inherently incompatible with each other. Now the serious explanation, to put it simply: MPI style is tightly coupled; P2P primitives like recv, send, irecv, isend are there simply because you wouldn’t want to introduce an additional library to complete a simple P2P communication, like collecting a state or a log. RPC style is completely decoupled; services are there and you can access them if a process wants to and has the permission to. Therefore synchronization becomes a disaster because processes are distributed. TensorPipe is mainly just a smart payload delivery layer; it is there because the important “decision” feature will greatly improve the performance of RPC, since unoptimized RPC libraries such as gRPC, Dubbo, or Swift do not handle tensors on different devices well. It is designed for the distributed scenario. It can also handle elasticity and dynamic size very well (e.g., initializing your program with different numbers of process roles, which is especially important if you want to add some springiness to your application); in this case MPI is just way too rigid. BTW, distributed applications are complex; you cannot expect PyTorch to be any simpler, because it is already very simple. Its RPC API could be considered “overly simple and crude” if you compare it to an industrial-grade RPC framework like Dubbo, designed by Alibaba: [Dubbo architecture diagram, 900×674] So, in a word, please don’t mix these two things together; they are different.
st178346
You can use MPI to implement RPC, technically, but the performance could be really bad. For example, in order to send a message of arbitrary length, in MPI you need to send the size to your target, then the target has to allocate the memory, and only then can you send the payload. Since MPI is not a raw connection like TCP or InfiniBand, you would expect more delay in these two communications, and you have to deal with process failures! MPI will fail if any component process has failed, and that’s why we would like to remove that behavior in RPC, see 88856 5.
st178347
@iffiX I would like to ask you to keep the discussion objective. If you disagree with something, explain your position and keep the discussion alive. While the majority of your post is a great explanation, the first part is unfortunately not.
st178348
Inappropriate part removed and updated, sorry for any issues caused by the provoking part in the comment. Also @Vibhatha_Abeykoon
st178349
@Vibhatha_Abeykoon @mrshenli I am reviewing this topic today and I have a few suggestions. I will use “process/device” (e.g., “worker:0/cuda:0”) as a location descriptor of a tensor, and a tensor is the only holder of all data. The first thing is: Vibhatha_Abeykoon: How to partition the model I have a primitive design for this purpose: an assigner and a simple wrapper. The assigner currently won’t partition a model automatically; instead it will just assign user-specified partitions based on a series of heuristics (GPU mem, GPU power, CPU mem, CPU power, model complexity, bandwidth of the connection between models), but it could also be reworked to fulfill your purpose. You just need to wrap all of your submodules in the wrapper; the wrapper just stores input/output location descriptors, nothing more. Simply speaking, partitioning just requires users to specify the input and output (process/device) requirements for a module/model, and then a smart assigner to assign partitions to a process/device. Theoretically a dynamic profiler is much better than a static heuristic assigner, since it actively detects hotspots and tries to even out the load on all of your nodes, but this introduces additional issues: Does evening out the load across nodes increase performance? How much cost does the additional transmission introduce? Does decreasing the load stop pushing your GPUs to their full capacity (kernel launching cost should be considered)? So there is a possibility that this solution does not meet the theoretical standard. I believe @mrshenli has studied this issue, judging from his profile page. I think many more experiments are needed to determine the best scheme. The second thing is: Vibhatha_Abeykoon: I have a few suggestions: if we can turn the to(device) call into a to(machine:device) kind of API endpoint, it will be much easier to work with, but I am not quite sure how the changes should be reflected internally. I think it won’t be too difficult if rpc.pair in #41546 2 is implemented; tensor.to("worker:1/cuda:0") would be equivalent to

# suppose there is a tensor on process "worker:0" and device "cuda:0" of this process
# move to process "worker:1" and device "cuda:1" of that process
# take care when torch.cuda.set_device is used
def pair_and_move_to(tensor, device, uuid):
    # uuid should be a unique identifier to identify this tensor
    # could be process_name:tensor_ptr
    tensor = tensor.to(device)
    rpc.pair(uuid, tensor)
    return RRef(tensor)

# on worker:0 when .to is invoked:
rpc.rpc_sync("worker:1", rpc.pair, args=(tensor, "cuda:1", tensor.some_uuid))

And for implementing Distributed Model Parallel Using Distributed RPC, there are many model-parallel methods; it depends on your model, algorithm framework, and application. For DDP-compatible RPC solutions specifically, I have an implementation of a gradient reduction server in my framework, which should be able to do the exact same thing as DDP does; however, this server implementation is also based on the new API RFC #41546 2, which hasn’t been implemented in pytorch. I have made a simple wrapper upon the current primitive RPC APIs for this RFC. It is not efficient, since two primitive RPC requests have to be made per wrapped high-level RPC API, but it is tested and workable, if you would like to take a look.
From my personal point of view, if torch could provide a way to “expose” a resource or service upon the RPC module, even RemoteModule could be easily implemented, since it is basically creating a module on a remote process and then exposing its “__call__()” method as a service in the global scope; #41546 2 could solve this problem. Summary: You can achieve all of these functions using current torch APIs, if you don’t mind a 20% ~ 30% efficiency loss and spending a little (or much) time building your own wheel; if you don’t want to, you can also use mine. It would definitely be better if torch could just provide these functions, with more optimizations. And @mrshenli @Kiuk_Chung, please chime in and offer some feedback and precious suggestions on #41546 2; there are some torchelastic issues I am not very familiar with and need some help with.
st178350
Oh, and about: Vibhatha_Abeykoon: deploying this multi-machine model parallel module? Will there be modifications to torch.distributed.launch? What is the current method to launch this as we do for DDP? That’s complex. My personal experience says that you should try to split and group your process functionalities, for example by grouping them by “role”: (Image from RFC #41425 2) This idea comes from “microservices”. It makes your application logic much clearer to understand. RFC proposal #41546 also contains an automatic role-based launcher implementation to address this issue. However, a role-based design is not compatible with:

if __name__ == "__main__":
    ...
    tensor.to("worker:0/cuda:1")
    # do some computation
    tensor.to("worker:1/cuda:0")

because you are manually specifying every destination and location.
st178351
I’m trying to understand the differences between using DataParallel and increasing num_workers in the DataLoader. It seems that DataParallel divides the batch uniformly across the available GPUs, allowing the forward and backward passes to be done on each split-up batch in parallel. But what does increasing num_workers in the DataLoader do? That is, is a separate process spawned to load each new batch?
st178352
If num_workers is > 0, then that many separate worker processes will be spawned to do the data loading. Each worker process prepares a single batch at a time. This prevents a data-loading bottleneck, since multiple processes work on it in parallel; with num_workers=0, after each forward pass the GPU waits for the next batch of data to be loaded.
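For example, the only change is the num_workers argument; the dataset below is a stand-in:

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 16), torch.randint(0, 10, (1000,)))

# num_workers=0: batches are prepared in the main process, between training steps
loader_sync = DataLoader(dataset, batch_size=32, num_workers=0)

# num_workers=4: four worker processes prepare batches in the background
loader_async = DataLoader(dataset, batch_size=32, num_workers=4, pin_memory=True)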
st178353
I’m trying to get DistributedDataParallel to work on a code base, using pytorch/fairseq 6 as a reference implementation. I’m finding the implementation there difficult to comprehend. I’ve opened an issue 7 for the same. Below is a (hopefully) complete relevant extract. The uncommented segment I’ve already got working, and the loss is converging.

def train_step(self, sample):
    self.model.train()
    self._optimizer.zero_grad()
    sample = move_to(sample, self.device)
    loss, batch_sizes = self.model(sample)

    # 1: Is the below done implicitly?
    #    Seems to be missing in fairseq code.
    # all-gather([loss, batch_sizes])
    # loss = loss.sum()/batch_sizes.sum()

    loss.backward()

    # 2: Something similar to the following
    #    exists. What is happening here?
    # for p in parameters-optimized:
    #     p.grad = p.grad*distributed_world_size/batch_sizes.sum()

    self._optimizer.step()
    return loss.item()

My concerns are: Shouldn’t I be doing an all-gather as indicated in the code? Is this done implicitly? What is happening in the second segment?
st178354
Hi @jerinphilip, Why would you need a gather on the loss? I can see how you might think the loss aggregation is needed for distributed training but what happens is the following. Each process computes its own output, using its own input, with its own activations, and computes its own loss. Then on loss.backward() all processes reduce their gradients. As loss.backward() returns, the gradients of your model parameters will be the same, and the optimizer in each process will perform the exact same update to the model parameters. This normalizes the gradients w.r.t. the total number of processes. If you end up using torch.nn.parallel.DistributedDataParallel, this is already done for you. It is possible this is still a part of fairseq as earlier versions had a custom approach for distributed data parallelism, whereas newer versions can use the upstream wrapper directly (IIRC).
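In code, nothing extra is needed beyond wrapping the model; a minimal sketch, assuming the process group is already initialized and rank is this process's GPU index:

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

model = torch.nn.Linear(16, 2).to(rank)
ddp_model = DDP(model, device_ids=[rank])

out = ddp_model(torch.randn(8, 16, device=rank))  # each rank feeds its own batch
loss = out.sum()
loss.backward()  # gradients are allreduced and averaged across ranks inside this call
# every rank now holds identical .grad tensors, so each optimizer takes the same step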
st178355
Hi! I need some advice. I have 4 processes/GPUs with DDP. Should I implement loss reduction by sum (using all_reduce) before the backward pass, or is it enough for the gradients to be automatically averaged by DDP? Could increasing the learning rate by a factor of 4 compensate for the division by the number of GPUs done by the averaging? I am trying to get a DDP run equivalent to DataParallel.
st178356
Andras_Iani: Should I implement loss reduction by sum (using all_reduce) before the backward pass, or is it enough for the gradients to be automatically averaged by DDP? It is not necessary to use another allreduce to sum the loss, and an additional allreduce might have a considerable negative impact on training speed. Could increasing the learning rate by a factor of 4 compensate for the division by the number of GPUs done by the averaging? This is not guaranteed, and the loss function itself also plays a role here. See this discussion: Should we split batch_size according to ngpu_per_node when DistributedDataparallel 23 I am trying to get a DDP run equivalent to DataParallel. There is a subtle difference between DP and DDP. IIUC, with DP, the grads from the replicated models are accumulated (i.e., summed) into the param.grad field of the original model, but DDP’s gradients are averaged. Not 100% confident, but I feel that if we would like DDP to behave as similarly to DP as possible, we probably should multiply DDP’s resulting gradients by world_size. Whether that is the same as using a 4X learning rate might depend on the optimizer algorithm.
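If you do want to mimic DP's summed gradients, a hedged sketch of that rescaling (ddp_model and optimizer are assumed to exist, and whether this is actually desirable depends on your loss and optimizer, as noted above) would be:

import torch.distributed as dist

# after loss.backward() on a DistributedDataParallel model the gradients are averaged
# across ranks; multiplying by world_size turns that average back into a sum, which is
# closer to what DataParallel accumulates on the source device
world_size = dist.get_world_size()
for p in ddp_model.parameters():
    if p.grad is not None:
        p.grad.mul_(world_size)
optimizer.step()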
st178357
I am working with the FCOS loss. The authors of FCOS treat the DDP case and implement reduction of the loss components inside the loss script. I should then get rid of that part of their code and not use reduction before backward. I will use reduction just for plotting the loss values (after backward) in the training script. Is that OK in your opinion? Thanks again!
st178358
Andras_Iani: I will use reduction just for plotting the loss values (after backward) in the training script. Yep, this should be OK.
st178359
Thank you for the useful explanations. From the discussion above I understand that the reason why one shouldn’t do an all_gather sum of the losses when training Distributed Data Parallel mode is that these all gather operations can slow down the process. Are there any other reasons why the loss tensors should not be summed other than performance reasons? I ask this because in case the loss tensors are small, if an all_gather sum is performed when computing the losses, this will result in identical losses for all processes. Therefore gradient averaging over processes will simply divide the losses by the number of processes. This has the advantage of mimicking the behavior of DataParallel and of providing consistent results independently of the number of processes being run without the need to adjust learning rates, batch sizes, etc. In short, when the cost of doing an all_gather sum of the losses is low, are there any other reasons beyond performance not to do it? And isn’t the consistent behavior independently of the number of processes an advantage? Thank you
st178360
KikoAumond: I ask this because in case the loss tensors are small, if an all_gather sum is performed when computing the losses, this will result in identical losses for all processes. Therefore gradient averaging over processes will simply divide the losses by the number of processes. The reason this is not sufficient is that the gradient computation depends on both the loss and the activations, and the activations depend on the input data, which is different in all processes. Therefore, even if the loss is communicated, you still need to communicate either gradients or activations to make sure all model parameters in all processes stay consistent. Otherwise, if you only communicate the loss and then do the backward pass locally, the models in different processes might diverge.
st178361
Hi, According to the NCCL documentation, since NCCL 2.7 point-to-point communication can be achieved using ncclSend and ncclRecv. However, in PyTorch the newest stable version still doesn’t support send and receive when using NCCL as the backend. I’m wondering, is there any way to achieve point-to-point communication between GPUs in PyTorch? And is there any way to integrate ncclSend and ncclRecv into PyTorch distributed? Thanks.
st178362
Hey @Yi_Zhang, we are working on adding P2P to NCCL ProcessGroup backend. We just bumped up the NCCL submodule version in https://github.com/pytorch/pytorch/pull/41608 45. For now, to work around it, you can create a sub group of 2 ranks, and then use dist.broadcast(tensor, src, group=sub_group) to mimic P2P send/recv. PipeDream is already using that. If you need general P2P support, you could try the RPC API 25. A caveat is that we are still working on improving support for GPU tensors. https://github.com/pytorch/pytorch/issues/41369 22
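A sketch of that workaround (assuming the NCCL process group is already initialized on every rank, and that the one-GPU-per-rank device choice below fits your setup) could look like:

import torch
import torch.distributed as dist

pair = dist.new_group(ranks=[0, 1])  # every rank must call new_group, even non-members

t = torch.zeros(10, device=f"cuda:{dist.get_rank()}")
if dist.get_rank() == 0:
    t += 42.0  # rank 0 is the "sender"

if dist.get_rank() in (0, 1):
    # broadcast inside the 2-rank group: rank 0 is the source, rank 1 "receives"
    dist.broadcast(t, src=0, group=pair)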
st178363
How does the DistributedSampler (together with DDP) split the dataset across different GPUs? I know it will split the dataset into num_gpus chunks and each chunk will go to one of the GPUs. Is it sampled randomly or sequentially?
st178364
First, it checks if the dataset size is divisible by num_replicas. If not, extra samples are added. If shuffle is turned on, it performs a random permutation before subsampling. You should use the set_epoch function to modify the random seed for that. Then the DistributedSampler simply subsamples the data from the whole dataset. https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py#L68 737

# subsample
indices = indices[self.rank:self.total_size:self.num_replicas]

Note that adding the extra data could cause issues at evaluation time due to the duplicated samples. I personally use a custom sampler (DistributedEvalSampler 269) when testing my models.
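Typical usage, with the per-epoch reseeding mentioned above (train_set and num_epochs are assumed from your own script):

from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# assumes init_process_group() has run; rank/world_size are picked up automatically
sampler = DistributedSampler(train_set, shuffle=True)
loader = DataLoader(train_set, batch_size=32, sampler=sampler)

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # reseeds the shuffling permutation for this epoch
    for batch in loader:
        ...  # training step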
st178365
Hello, I was trying to improve one of my multi-node distributed training examples (https://leimao.github.io/blog/PyTorch-Distributed-Training/ 37) by adding some torch.distributed.barrier so that I could do some multiprocess-unsafe actions, such as data download and folder creation. After adding the torch.distributed.barrier, the training could still be done on a single-node multi-GPU machine. However, it got halted on a multi-node multi-GPU machine. Can anyone suggest if it is a PyTorch bug or it is my problem? Thank you. Here is also the modified script that has torch.distributed.barrier:

import torch
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import argparse
import os
import random
import numpy as np

def set_random_seeds(random_seed=0):
    torch.manual_seed(random_seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(random_seed)
    random.seed(random_seed)

def evaluate(model, device, test_loader):
    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data[0].to(device), data[1].to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    accuracy = correct / total
    return accuracy

def main():
    num_epochs_default = 100
    batch_size_default = 256  # 1024
    learning_rate_default = 0.1
    random_seed_default = 0
    model_dir_default = "saved_models"
    model_filename_default = "resnet_distributed.pth"

    # Each process runs on 1 GPU device specified by the local_rank argument.
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument("--local_rank", type=int, help="Local rank. Necessary for using the torch.distributed.launch utility.")
    parser.add_argument("--num_epochs", type=int, help="Number of training epochs.", default=num_epochs_default)
    parser.add_argument("--batch_size", type=int, help="Training batch size for one process.", default=batch_size_default)
    parser.add_argument("--learning_rate", type=float, help="Learning rate.", default=learning_rate_default)
    parser.add_argument("--random_seed", type=int, help="Random seed.", default=random_seed_default)
    parser.add_argument("--model_dir", type=str, help="Directory for saving models.", default=model_dir_default)
    parser.add_argument("--model_filename", type=str, help="Model filename.", default=model_filename_default)
    parser.add_argument("--resume", action="store_true", help="Resume training from saved checkpoint.")
    argv = parser.parse_args()

    local_rank = argv.local_rank
    num_epochs = argv.num_epochs
    batch_size = argv.batch_size
    learning_rate = argv.learning_rate
    random_seed = argv.random_seed
    model_dir = argv.model_dir
    model_filename = argv.model_filename
    resume = argv.resume

    # Initializes the distributed backend which will take care of sychronizing nodes/GPUs
    torch.distributed.init_process_group(backend="nccl")
    # torch.distributed.init_process_group(backend="gloo")

    if local_rank != 0:
        torch.distributed.barrier()

    # Create directories outside the PyTorch program
    # Only create directory in one process because it is not multiprocess safe
    if not os.path.exists(model_dir):
        os.makedirs(model_dir)

    # Prepare dataset and dataloader
    transform = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    ])

    train_set = torchvision.datasets.CIFAR10(root="data", train=True, download=True, transform=transform)
    test_set = torchvision.datasets.CIFAR10(root="data", train=False, download=True, transform=transform)

    model_filepath = os.path.join(model_dir, model_filename)

    # We need to use seeds to make sure that the models initialized in different processes are the same
    set_random_seeds(random_seed=random_seed)

    # Encapsulate the model on the GPU assigned to the current process
    model = torchvision.models.resnet18(pretrained=False)

    if local_rank == 0:
        torch.distributed.barrier()

    device = torch.device("cuda:{}".format(local_rank))
    model = model.to(device)
    ddp_model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank)

    # We only save the model who uses device "cuda:0"
    # To resume, the device for the saved model would also be "cuda:0"
    if resume == True:
        map_location = {"cuda:0": "cuda:{}".format(local_rank)}
        ddp_model.load_state_dict(torch.load(model_filepath, map_location=map_location))

    # Restricts data loading to a subset of the dataset exclusive to the current process
    train_sampler = DistributedSampler(dataset=train_set)

    train_loader = DataLoader(dataset=train_set, batch_size=batch_size, sampler=train_sampler, num_workers=8)

    # Test loader does not have to follow distributed sampling strategy
    test_loader = DataLoader(dataset=test_set, batch_size=128, shuffle=False, num_workers=8)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=1e-5)

    # Loop over the dataset multiple times
    for epoch in range(num_epochs):

        print("Local Rank: {}, Epoch: {}, Training ...".format(local_rank, epoch))

        # Save and evaluate model routinely
        if epoch % 10 == 0:
            if local_rank == 0:
                accuracy = evaluate(model=ddp_model, device=device, test_loader=test_loader)
                torch.save(ddp_model.state_dict(), model_filepath)
                print("-" * 75)
                print("Epoch: {}, Accuracy: {}".format(epoch, accuracy))
                print("-" * 75)

        ddp_model.train()

        for data in train_loader:
            inputs, labels = data[0].to(device), data[1].to(device)
            optimizer.zero_grad()
            outputs = ddp_model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

if __name__ == "__main__":
    main()
st178366
barrier() requires all processes in your process group to join, so this is incorrect:

if local_rank == 0:
    torch.distributed.barrier()

Remember, all collective APIs of torch.distributed (i.e., not including the P2P APIs: send, recv, isend, irecv) require all processes in your created process group, either the implicit global group or a sub-group created by torch.distributed.new_group, to execute. Will this solve your problem? Please have a try and respond.
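To illustrate the point: a barrier only returns once every rank in the group has reached it, so a call that only some ranks ever execute is exactly what deadlocks. A minimal sketch of the property being described:

import torch.distributed as dist

# every rank in the group has to reach this call; the ranks that did call barrier()
# block until the missing ones join
dist.barrier()

# guarding it so that only some ranks ever call it is what causes a hang:
# if dist.get_rank() == 0:
#     dist.barrier()  # ranks != 0 never join -> the group deadlocks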
st178367
Thank you @iffiX for the insightful response. I am not sure if I fully understood, but I do have:

if local_rank != 0:
    torch.distributed.barrier()

earlier in the code. The purpose is to pause the execution of all the local ranks except for the first local rank, so that the first rank can create the directory and download the dataset without conflicts. Once the first local rank has completed the download and directory creation, the rest of the local ranks can use the downloaded dataset and directory. In your opinion, how should I modify my code in particular? Thank you. Best, Lei