st178568 | Solved by mrshenli in post #2
If the last line is used to ensure all processes finish reading, why doesn't it directly follow ddp_model.load_state_dict?
Good catch! The original reason for adding that barrier is to guard the file deletion below:
if rank == 0:
os.remove(CHECKPOINT_PATH)
But looking at it again… |
st178569 | If the last line is used to ensure all processes finish reading, why doesn't it directly follow ddp_model.load_state_dict?
Good catch! The original reason for adding that barrier is to guard the file deletion below:
if rank == 0:
os.remove(CHECKPOINT_PATH)
But looking at it again, this barrier is not necessary. Because the backward() on the DDP model is also a synchronization point as it calls AllReduce internally. Let me remove that.
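For reference, here is a minimal sketch of the checkpoint pattern being discussed (adapted from the DDP tutorial; rank, ddp_model, and CHECKPOINT_PATH are assumed to be set up as in that example):
import os
import torch
import torch.distributed as dist

if rank == 0:
    # only one process writes the checkpoint
    torch.save(ddp_model.state_dict(), CHECKPOINT_PATH)

# make sure rank 0 has finished writing before any process reads
dist.barrier()
map_location = {"cuda:0": f"cuda:{rank}"}
ddp_model.load_state_dict(torch.load(CHECKPOINT_PATH, map_location=map_location))

# this second barrier only guards the deletion below; the backward() call later
# in the iteration is already a synchronization point, so it can often be dropped
dist.barrier()
if rank == 0:
    os.remove(CHECKPOINT_PATH)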
For each iteration, do we need to call dist.barrier() ?
No. Two common reasons for using a barrier are:
to avoid AllReduce timeouts caused by skewed workloads across DDP processes
code after barrier() on rank A depends on the completion of code before barrier() on rank B.
If neither of these is a concern in your use case, then the barrier shouldn't be necessary. |
st178570 | tengerye:
If I just want to save a model, I don't need dist.barrier(), right?
Yep, that should be fine. If only rank 0 saves the model and that might take very long, you can set the timeout argument in init_process_group to a larger value. The default is 30 minutes. |
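For reference, a minimal sketch of passing a larger timeout; the backend, rank, and world_size values are placeholders supplied by whatever launcher is used:
from datetime import timedelta
import torch.distributed as dist

dist.init_process_group(
    backend="gloo",
    init_method="env://",
    rank=rank,                     # provided by the launcher
    world_size=world_size,         # provided by the launcher
    timeout=timedelta(hours=1),    # default is timedelta(minutes=30)
)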
st178571 | In a GAN-based model that contains one generator and three discriminators, all the models are wrapped in torch.nn.parallel.DistributedDataParallel() with different process_group arguments. The loss function contains two parts, like this:
d_total_loss = d_real_loss + d_fake_loss
and the backpropagation is: d_total_loss.backward()
when I run the program, the error is:
[screenshot of the error message, 2008×335]
But when I run d_real_loss.backward() or d_fake_loss.backward() alone, the program runs normally.
What's more, I have another problem: when I use generator_model.train() in my program and run it, there is an error:
[screenshot of the error message, 2018×507]
Could you give me some advice to solve these problems? |
st178572 | Solved by mrshenli in post #6
What should I do if I have two different Discriminator classes?
As long as forward and backward on one DDP instance are called alternately, it should work. So, there are at least two options:
Wrap the two Discriminators into one nn.Module, say CombinedDiscriminator, and then pass the CombinedDis… |
st178573 | Hey @lzkzls, could you please share a minimal reproducible example?
The first error picture does not seem to be a DDP error. Does the code run correctly without DDP? It looks like the autograd graphs generating d_real_loss and d_fake_loss share some operators/parameters.
The second error picture seems to suggest the generator_model is a None object? It will be helpful to see a self-contained repro of this error. |
st178574 | When I use torch.nn.DataParallel(), the code runs correctly.
Thank you so much!
Here is a minimal reproducible example:
github.com/lzkzls/DDPtest/blob/master/DDP_test.py
import torch, torchvision
import torch.nn as nn
import torch.distributed as dist
import torchvision.transforms as transforms
import torch.optim as optim
#input (1,28,28)
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.conv2 = nn.ModuleList()
self.conv2.append(nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1),
nn.BatchNorm2d(16),
nn.LeakyReLU(negative_slope=0.2)
))
self.conv2.append(nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1),
nn.BatchNorm2d(32),
nn.LeakyReLU(negative_slope=0.2)
(file truncated in the embed; see the link above for the full source)
and I run this code with command: sudo CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m torch.distributed.launch --nproc_per_node=4 DDP_test.py.
Note: when I uncomment lines 77 and 78, there is an error:
[screenshot of the error message, 1348×160] |
st178575 | Hey @lzkzls
The following code works for me. I found two errors:
The original code didn't set local_rank correctly. It needs to read the local_rank argument instead of hardcoding it to 0.
For DDP, you need to call forward and backward in an interleaved fashion, instead of doing two forwards followed by one backward. This is fixed by letting the forward function of Discriminator take both the fake and real images.
import argparse
import torch, torchvision
import torch.nn as nn
import torch.distributed as dist
import torchvision.transforms as transforms
import torch.optim as optim
#input (1,28,28)
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.conv2 = nn.ModuleList()
self.conv2.append(nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1),
nn.BatchNorm2d(16),
nn.LeakyReLU(negative_slope=0.2)
))
self.conv2.append(nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1),
nn.BatchNorm2d(32),
nn.LeakyReLU(negative_slope=0.2)
))
self.conv2.append(nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1),
nn.BatchNorm2d(64),
nn.LeakyReLU(negative_slope=0.2)
))
self.conv2.append(nn.Sequential(nn.Conv2d(64, 1, 3, stride=2),
nn.BatchNorm2d(1),
nn.LeakyReLU(negative_slope=0.2)
))
def forward(self, fake, real):
for conv_layer in self.conv2:
fake = conv_layer(fake)
real = conv_layer(real)
return fake.view(-1,1), real.view(-1, 1)
class Generator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.deconv2 = nn.ModuleList()
self.deconv2.append(nn.Sequential(nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2,padding=1),
nn.BatchNorm2d(32),
nn.LeakyReLU()
))
self.deconv2.append(nn.Sequential(nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,padding=1),
nn.BatchNorm2d(16),
nn.LeakyReLU()
))
self.deconv2.append(nn.Sequential(nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,padding=1),
nn.BatchNorm2d(1),
nn.LeakyReLU()
))
def forward(self, x):
for layer in self.deconv2:
x = layer(x)
return x
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
parser.add_argument("--local_world_size", type=int, default=1)
args = parser.parse_args()
local_rank = args.local_rank
dist.init_process_group(backend='nccl', init_method='env://')
disciminator_model = Discriminator()
generator_model = Generator()
torch.cuda.set_device(local_rank)
disciminator_model.cuda(local_rank)
generator_model.cuda(local_rank)
pg1 = dist.new_group(range(dist.get_world_size()))
pg2 = dist.new_group(range(dist.get_world_size()))
disciminator_model = torch.nn.parallel.DistributedDataParallel(disciminator_model, device_ids=[local_rank],
output_device=local_rank, process_group=pg1)
generator_model = torch.nn.parallel.DistributedDataParallel(generator_model, device_ids=[local_rank],
output_device=local_rank, process_group=pg2)
# disciminator_model = disciminator_model.train()
# generator_model = generator_model.train()
g_optimizer = optim.Adam(params=generator_model.parameters(), lr=1e-4)
d_optimizer = optim.Adam(params=disciminator_model.parameters(), lr =1e-4)
bcelog_loss = nn.BCEWithLogitsLoss().cuda(local_rank)
train_dataset = torchvision.datasets.MNIST(root='../../data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
batch_size = 8
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=4,
pin_memory=True,
sampler=train_sampler)
for epoch in range(100):
for i, (images, _) in enumerate(train_loader):
images = images.cuda(local_rank, non_blocking=True)
real_tensor = torch.full((batch_size,1), 1, dtype=torch.float32).cuda(local_rank)
fake_tensor = torch.zeros((batch_size,1), dtype=torch.float32).cuda(local_rank)
noise_tensor = torch.rand((batch_size, 64, 4, 4))
gen_image = generator_model(noise_tensor)
d_fake, d_real = disciminator_model(gen_image, images)
#d_real = disciminator_model(images)
d_fake_loss = bcelog_loss(d_fake, fake_tensor)
d_real_loss = bcelog_loss(d_real, real_tensor)
d_total_loss = d_fake_loss + d_real_loss
g_optimizer.zero_grad()
d_optimizer.zero_grad()
d_total_loss.backward()
g_optimizer.step()
d_optimizer.step()
if i % 10 == 0:
print(f"processed {i} images")
print("current epoch: ", epoch) |
st178576 | Thank you! You are so great! What should I do if I have two different Discriminator classes? |
st178577 | What should I do if I have two different Discriminator classes?
As long as forward and backward on one DDP instance are called alternately, it should work. So, there are at least two options:
Wrap the two Discriminators into one nn.Module, say CombinedDiscriminator, and then pass the CombinedDiscriminator to DDP ctor.
Create a dedicated DDP instance (with dedicated ProcessGroup instance) for each Discriminator. |
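A rough sketch of option 1, assuming Discriminator1 and Discriminator2 are placeholders for the actual discriminator classes:
import torch
import torch.nn as nn

class CombinedDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1 = Discriminator1()   # placeholder class
        self.d2 = Discriminator2()   # placeholder class

    def forward(self, fake, real):
        # one forward pass produces the outputs of both discriminators,
        # so a single DDP instance sees one forward/backward per iteration
        return self.d1(fake, real), self.d2(fake, real)

# combined = torch.nn.parallel.DistributedDataParallel(
#     CombinedDiscriminator().cuda(local_rank), device_ids=[local_rank])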
st178578 | I have NCCL 2.5 installed on the system, but torch.cuda.nccl.version() shows 2.4.8.
On a single machine with 2 gpus, it works fine. |
st178579 | Hey @nash, NCCL is packaged in PyTorch as a submodule. The current version is 2.4.8. If you would like to use your own version, you can set USE_SYSTEM_NCCL=1. |
st178580 | I am working on DistributedDataParallel, trying to speed up the training process. However, after two epochs, the distributed version did not perform as well as the normal one.
The log of the distributed version:
Epoch [1/2], Step [100/150], Loss: 2.1133
Epoch [2/2], Step [100/150], Loss: 1.9204
Training complete in: 0:00:27.426653
Dev loss: 1.8674346208572388
The log of the normal version:
Epoch [1/2], Step [100/600], Loss: 2.1626
Epoch [1/2], Step [200/600], Loss: 1.9929
Epoch [1/2], Step [300/600], Loss: 1.9224
Epoch [1/2], Step [400/600], Loss: 1.7479
Epoch [1/2], Step [500/600], Loss: 1.6264
Epoch [1/2], Step [600/600], Loss: 1.5411
Epoch [2/2], Step [100/600], Loss: 1.4387
Epoch [2/2], Step [200/600], Loss: 1.3243
Epoch [2/2], Step [300/600], Loss: 1.2894
Epoch [2/2], Step [400/600], Loss: 1.1754
Epoch [2/2], Step [500/600], Loss: 1.1271
Epoch [2/2], Step [600/600], Loss: 1.1246
Training complete in: 0:00:53.779830
Dev loss: 1.1193695068359375
the source code
distributed version
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def train(gpu, args):
rank = args.nr * args.gpus + gpu
dist.init_process_group(backend='nccl', init_method='env://', world_size=args.world_size, rank=rank)
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 100
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Wrap the model
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
# Data loading code
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,
num_replicas=args.world_size,
rank=rank)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
start = datetime.now()
total_step = len(train_loader)
for epoch in range(args.epochs):
train_sampler.set_epoch(epoch)
for i, (images, labels) in enumerate(train_loader):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, args.epochs, i + 1, total_step,
loss.item()))
if gpu == 0:
print("Training complete in: " + str(datetime.now() - start))
dev_dataset = torchvision.datasets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor(),
download=False)
dev_loader = torch.utils.data.DataLoader(dataset=dev_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=0,
pin_memory=True)
_ = model.eval()
with torch.no_grad():
y_hat = []
y = []
for i, (images, labels) in enumerate(dev_loader):
y.append(labels.cuda(non_blocking=True))
y_hat.append(model(images.cuda(non_blocking=True)))
y_hat = torch.cat(y_hat)
y = torch.cat(y)
loss = criterion(y_hat, y)
print(f'Dev loss: {loss.item()}')
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=0, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
###################################################
args.world_size = args.gpus * args.nodes #
os.environ['MASTER_ADDR'] = HOST #
os.environ['MASTER_PORT'] = PORT #
mp.spawn(train, nprocs=args.gpus, args=(args, )) #
###################################################
if __name__ == '__main__':
"""
Epoch [1/2], Step [100/150], Loss: 2.1133
Epoch [2/2], Step [100/150], Loss: 1.9204
Training complete in: 0:00:27.426653
Dev loss: 1.8674346208572388
"""
main()
the single gpu version
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def train(gpu, args):
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 100
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Data loading code
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=0,
pin_memory=True)
start = datetime.now()
total_step = len(train_loader)
for epoch in range(args.epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
epoch + 1,
args.epochs,
i + 1,
total_step,
loss.item())
)
if gpu == 0:
print("Training complete in: " + str(datetime.now() - start))
dev_dataset = torchvision.datasets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor(),
download=False)
dev_loader = torch.utils.data.DataLoader(dataset=dev_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=0,
pin_memory=True)
_ = model.eval()
with torch.no_grad():
y_hat = []
y = []
for i, (images, labels) in enumerate(dev_loader):
y.append(labels.cuda(non_blocking=True))
y_hat.append(model(images.cuda(non_blocking=True)))
y_hat = torch.cat(y_hat)
y = torch.cat(y)
loss = criterion(y_hat, y)
print(f'Dev loss: {loss.item()}')
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=0, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
train(0, args)
if __name__ == '__main__':
"""
Epoch [1/2], Step [100/600], Loss: 2.1626
Epoch [1/2], Step [200/600], Loss: 1.9929
Epoch [1/2], Step [300/600], Loss: 1.9224
Epoch [1/2], Step [400/600], Loss: 1.7479
Epoch [1/2], Step [500/600], Loss: 1.6264
Epoch [1/2], Step [600/600], Loss: 1.5411
Epoch [2/2], Step [100/600], Loss: 1.4387
Epoch [2/2], Step [200/600], Loss: 1.3243
Epoch [2/2], Step [300/600], Loss: 1.2894
Epoch [2/2], Step [400/600], Loss: 1.1754
Epoch [2/2], Step [500/600], Loss: 1.1271
Epoch [2/2], Step [600/600], Loss: 1.1246
Training complete in: 0:00:53.779830
Dev loss: 1.1193695068359375
"""
main()
any help is appreciated |
st178581 | Oops, I just found it's an issue with the learning rate.
When I set the learning rate to 4e-4, four times the rate used on a single GPU, I get:
Epoch [1/2], Step [100/150], Loss: 1.7276
Epoch [2/2], Step [100/150], Loss: 1.2062
Training complete in: 0:00:18.275619
Dev loss: 1.1129298210144043 |
st178582 | Right, learning rate, batch size, and loss function can all play a role here. For example, if using the same per-process batch size, the DDP gang will collectively consume more samples with a larger world size, and hence the learning rate will need to be adjusted accordingly. |
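A sketch of the common linear scaling heuristic, assuming the process group is already initialized; a stand-in nn.Linear is used for the real model:
import torch.nn as nn
import torch.distributed as dist
import torch.optim as optim

model = nn.Linear(10, 10)                       # stand-in for the real model
base_lr = 1e-4                                  # learning rate tuned for single-GPU training
scaled_lr = base_lr * dist.get_world_size()     # scale with the number of DDP processes
optimizer = optim.SGD(model.parameters(), lr=scaled_lr)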
st178583 | If my model's original batch size is 32 and I use two GPUs (one process per GPU), I use a batch size of 16 (32/ngpu). But if the number of GPUs is 3 or any other odd number, should I keep the size at about 32/3 ≈ 10, or limit the number of GPUs to 2?
Any help is welcome. |
st178584 | Solved by 111344 in post #4
I think there is no difference between gpu=2 or 3.
In my experiment:
batch-size=8 gpu=2 -->batch_size=4 for single gpu
batch-size=8 gpu=3 -->batch_size=2 for single gpu(so total batch_size is 6)
batch-size=8 or 6, under normal circumstances, it does not have much impact on performance
For s… |
st178585 | Hey @111344, if you are looking for mathematical equivalence, you will need at least two things:
each DDP process processes batch_size / num_gpu samples: this allows DDP to collectively process the same amount of input as local training.
loss_fn(model([sample1, sample2])) == (loss_fn(model([sample1])) + loss_fn(model([sample2]))) / 2: this is because DDP uses AllReduce to compute the average gradients across all processes. If the above condition is not met, then the average gradients across all DDP processes are not equivalent to the local training gradients.
However, practically, applications usually do not have to satisfy the above conditions. Did you see any training accuracy degradation when scaling up to 3 GPUs? |
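A small sketch illustrating the two conditions, assuming the process group has already been initialized:
import torch.nn as nn
import torch.distributed as dist

global_batch_size = 32
# condition 1: each process consumes an equal share of the global batch
per_process_batch_size = global_batch_size // dist.get_world_size()

# condition 2: a mean-reduced loss satisfies
#   loss(model(b1 + b2)) == (loss(model(b1)) + loss(model(b2))) / 2
# when the two chunks have equal size
criterion = nn.CrossEntropyLoss(reduction="mean")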
st178586 | I haven’t tested GPU = 3, but I will take the time to verify it and then reply to you |
st178587 | I think there is no difference between gpu=2 or 3.
In my experiment:
batch-size=8 gpu=2 -->batch_size=4 for single gpu
batch-size=8 gpu=3 -->batch_size=2 for single gpu(so total batch_size is 6)
batch-size=8 or 6, under normal circumstances, it does not have much impact on performance
Some tasks that are very sensitive to batch_size may need to take it into account |
st178588 | I came across some way to change the GPU for component modules of a larger model in the following link:
https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html 1
However, my model runs in a distributed environment (DDP) and has several sub-modules; holding all of them on the GPU for a forward-backward cycle decreases the batch size I can fit for an update. But for each forward/backward pass, not all of them are active, so they are not required to stay on the GPU (I can load them for a single batch) and can otherwise be kept in non-GPU memory.
In other words, I have parameters Theta + theta[t] (for t=1…T), where t is a particular task. I want to load only a single theta[t] into the GPU for a forward and backward pass and fit larger batches. Currently I'm holding all theta[t] on the GPU.
Is it possible to use the same semantics if it’s the same (sub)-module (theta[t]) to achieve the intention described above? |
st178589 | Hey @jerinphilip, I believe this is possible. You can use Tensor.to(device) to move the parameters to the GPUs in the forward pass, and the to (i.e., copy) operator should be added into the autograd graph, so that the backward pass will compute gradients for the original on-CPU parameters properly. Let me know if it didn’t work.
Note that, although this can reduce the footprint on GPU memory, DDP would still need to communicate the same amount of parameters, as that is determined at DDP construction time. And as those parameters are on CPU, you won’t be able to use NCCL which might cause considerable slow down. |
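A minimal sketch of that idea (names and shapes are made up): the per-task parameters live on CPU, only the active one is copied to the GPU inside forward(), and because .to() is recorded by autograd, backward() accumulates the gradient into the CPU parameter.
import torch
import torch.nn as nn

class TaskHeads(nn.Module):
    def __init__(self, num_tasks=4, dim=16):
        super().__init__()
        # theta[t] for each task, kept on CPU until needed
        self.task_weights = nn.ParameterList(
            [nn.Parameter(torch.randn(dim, dim)) for _ in range(num_tasks)]
        )

    def forward(self, x, t):
        w = self.task_weights[t].to(x.device)   # copy only the active task's params
        return x @ w

device = "cuda" if torch.cuda.is_available() else "cpu"
heads = TaskHeads()
heads(torch.randn(2, 16, device=device), t=1).sum().backward()
print(heads.task_weights[1].grad is not None)   # True: the grad lands on the CPU parameter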
st178590 | Hey @jerinphilip, this page briefly describes DDP: https://pytorch.org/docs/master/notes/ddp.html 5
We have a paper with more comprehensive details. Let me upload that to arXiv. |
st178591 | mrshenli:
Note that, although this can reduce the footprint on GPU memory, DDP would still need to communicate the same amount of parameters, as that is determined at DDP construction time. And as those parameters are on CPU, you won’t be able to use NCCL which might cause considerable slow down.
Where do I obtain details corresponding to this particular information? Isn't only .grad meant to be communicated, with the workers applying the updates individually? If my theta[t] parameters only have gradients for the particular task, would this help the case? I'm reading the Forward Pass section of Internal Design; with find_unused_parameters, it is possible to operate on a subgraph, correct? I already have this enabled. |
st178592 | Where do I obtain details corresponding to this particular information?
We need to go through some internal approval process to publicly disclose that paper. It will take some time. For now https://pytorch.org/docs/master/notes/ddp.html is the best place for overall intro. The implementation of DDP is linked below:
github.com/pytorch/pytorch/blob/ebd869153c6adb37507d2ecb6a9fe3fd495fbb6e/torch/nn/parallel/distributed.py
github.com/pytorch/pytorch/blob/ebd869153c6adb37507d2ecb6a9fe3fd495fbb6e/torch/csrc/distributed/c10d/reducer.cpp
Isn’t only .grad meant to be communicated and the workers applying the updates individually?
No. Currently, at construction time, DDP creates a mapping from parameters to buckets and always communicates all buckets, even if some gradients are not used in one iteration. The reason for doing so is that it is possible that process 1 only computes grad A and process 2 only computes grad B. However, the AllReduce operation requires all processes to provide the same set of input tensors. So in this case, both process 1 and 2 need to communicate grads A and B. DDP could use another communication to first figure out which grads are used globally. However, if it blocks waiting for this signal, there will be no overlap between communication and computation, which could result in a >30% slowdown in some cases.
If my theta[t] parameters only have gradients for the particular task, would this help the case?
It helps to skip computation but not communication. DDP always communicates all parameters in the model you passed to DDP constructor.
I'm reading the Forward Pass section of Internal Design; with find_unused_parameters, it is possible to operate on a subgraph, correct?
That flag only allows DDP to skip waiting for the grads of those parameters. The communication phase is the same regardless of the value of find_unused_parameters. |
st178593 | arxiv.org/abs/2006.15704
Is this the relevant paper? |
st178594 | Hello there,
so I've been following the imagenet example (https://github.com/pytorch/examples/blob/e9e76722dad4f4569651a8d67ca1d10607db58f9/imagenet/main.py) on how to use multiprocessing, and I have a question on loss/statistics reporting.
Currently, to my understanding, the way the example is being structured, every GPU on every Node gets one instance of the entire model in memory by spawning a separate process and executing the main_worker() method.
The imagenet example uses some custom classes like AverageMeter and ProgressMeter to report progress during the training/ validation. However from what I can tell each process will have its own meters/ progress to report.
Thus if I were to execute the example I would get a progress report for each of the processes running.
Instead of getting multiple progress reports and using console logging, I would like to use SummaryWriter and Tensorboard to monitor the progress of my training.
Going through the documentation (https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#save-and-load-checkpoints) I came across a section on loading checkpoints that states:
When using DDP, one optimization is to save the model in only one process and then load it to all processes, reducing write overhead. This is correct because all processes start from the same parameters and gradients are synchronized in backward passes, and hence optimizers should keep setting parameters to the same values.
That got me thinking, that perhaps I could create a SummaryWriter only on the process with rank == 0 and use that writer to only report statistics to tensorboard. I’ve implemented it as such and it seems to be working.
However I’d like to ask whether my approach is correct or not. Is there a better/ different way to do this? |
st178595 | Hey @mobius, I believe this is correct. With DDP, every backward() pass is a global synchronization point. So all processes will run the same number of iterations and hence should have the same number of progress steps. Therefore, reporting the progress in one rank should be sufficient. (more details: link1, link2) |
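A small sketch of that approach, assuming the process group is already initialized:
import torch
import torch.distributed as dist
from torch.utils.tensorboard import SummaryWriter

# only the rank-0 process creates a writer; everyone else gets None
writer = SummaryWriter(log_dir="runs/ddp_experiment") if dist.get_rank() == 0 else None

for step in range(100):
    loss = torch.rand(1)                 # stand-in for the real training loss
    if writer is not None:
        writer.add_scalar("train/loss", loss.item(), step)

if writer is not None:
    writer.close()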
st178596 | Thank you for the quick response @mrshenli! Great job on the distributed/multiprocessing part of PyTorch, it's really intuitive. Also, your tutorials have been super helpful! I was not aware of the paper you mentioned, I will definitely check it out! |
st178597 | Given the pseudo model below:
class model(nn.Module):
def __init__(self):
self.alpha_params1 = nn.Parameter(<size>, requires_grad=True)
self.alpha_params2 = nn.Parameter(<size>, requires_grad=True)
< typical Conv2d layers>....
def forward(self, x):
<feed forward>
return output
def net_parameters(self, filtered_name='alpha_', recurse=True):
# return torch layer params
for name, param in self.named_parameters(recurse=recurse):
if filtered_name not in name:
yield param
def extra_params(self):
# return user defined params
return [self.alpha_params1, self.alpha_params2]
So above is my pseudo model code.
net = model() # instantiate model above
optimizer_1 = Adam(net.net_parameters(), lr=0.001, ...)
optimizer_2 = Adam(net.extra_params(), lr=0.003, ...)
criterion = L1()
##
# typical Apex Distributed Data Parrallel initialization
##
for epoch in epochs:
for data1, data2 in dataloader:
output = net(data1.data)
loss = criterion(output, data1.gt)
loss.backward()
optimizer_1.zero_grad()
optimizer_1.step()
output2 = net(data2.data)
loss2 = criterion(output2, data2.gt)
loss2.backward()
optimizer_2.zero_grad()
optimizer_2.step()
# save checkpoint
torch.save(net.module.state_dict(), f"state_dict_{epoch}.pth")
torch.save(net.module.extra_params(), f"extra_params_{epoch}.pth")
Above is my pseudo code for model instantiation and training.
Every 10 epochs, I checkpoint by saving model.state_dict() as well as the model's alpha parameters separately. I then compare the values of my alpha parameters between different epochs. What I found is that the parameters from separate epochs are identical in value, as are the model's weights. It seems no update is happening. Any help is appreciated. |
st178598 | Solved by mrshenli in post #2
I believe you need to call zero_grad() either before backward() or after step(). Otherwise, the grad is always 0 when optimizer step tries to use it. |
st178599 | Scott_Hoang:
loss2.backward()
optimizer_2.zero_grad()
optimizer_2.step()
I believe you need to call zero_grad() either before backward() or after step(). Otherwise, the grad is always 0 when optimizer step tries to use it. |
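For illustration, the loop from the question with the corrected ordering (using the same placeholder names, so this is a sketch rather than runnable code):
for data1, data2 in dataloader:
    optimizer_1.zero_grad()              # clear grads before backward, not after
    loss = criterion(net(data1.data), data1.gt)
    loss.backward()
    optimizer_1.step()

    optimizer_2.zero_grad()
    loss2 = criterion(net(data2.data), data2.gt)
    loss2.backward()
    optimizer_2.step()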
st178600 | I would like to gather some intermediate output features across different GPUs, somewhat like SyncBN, but it produces the error below. To reproduce this problem, I have built a toy model on GitHub, just a few lines of code. Any help is appreciated. Thanks a lot.
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /opt/conda/conda-bld/pytorch_1579027003190/work/torch/csrc/distributed/c10d/reducer.cpp:514)
Toy model to reproduce the error:
import os
import argparse
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel
import torch.optim as optim
import torch.multiprocessing as mp
import comm
parser = argparse.ArgumentParser(description='Distributed Data Parallel')
parser.add_argument('--world-size', type=int, default=2,
help='Number of GPU(s).')
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.stem = nn.Linear(10, 10)
self.branch1 = nn.Sequential(
nn.Linear(10, 10),
nn.ReLU(),
nn.Linear(10, 10))
self.branch2 = nn.Sequential(
nn.Linear(10, 10),
nn.ReLU(),
nn.Linear(10, 10))
def forward(self, x):
x1 = F.relu(self.stem(x)) # [20, 10]
branch1 = self.branch1(x1[:10])
branch2 = self.branch2(x1[10:])
branch1_list = [torch.empty_like(branch1, device='cuda') for _ in range(dist.get_world_size())]
dist.all_gather(branch1_list, branch1)
# branch1_list = comm.all_gather(branch1)
pred_weight = torch.cat(branch1_list, dim=0).mean(0, keepdim=True).expand(5, -1) # [5, 10]
out = branch2.mm(pred_weight.t())
return out
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
setup(rank, world_size)
# create model and move it to GPU with id rank
model = ToyModel().to('cuda')
ddp_model = DistributedDataParallel(model, device_ids=[dist.get_rank()], broadcast_buffers=False)
ddp_model.train()
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
for _ in range(5):
optimizer.zero_grad()
inputs = torch.randn((20, 10), device='cuda')
outputs = ddp_model(inputs)
labels = torch.randn_like(outputs).to('cuda')
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_fn, world_size):
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("NCCL", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
def cleanup():
dist.destroy_process_group()
if __name__ == "__main__":
args = parser.parse_args()
run_demo(demo_basic, args.world_size) |
st178601 | I believe this is the same issue discussed here: https://github.com/pytorch/pytorch/issues/40690 |
st178602 | When I execute the mnist_hogwild code example, I find that multiple processes are running in parallel on one GPU.
Question: Can multiple processes be executed in parallel on multiple GPUs? |
st178603 | Yes, that should be possible. Just move/copy the model to different devices. Did you encounter any error when doing that? |
st178604 | Hi,
I have a question here.
I derive from IterableDataset so it can yield my data. In particular, my input data is a list whose order I care about, because later I need to make sure that my output still follows the same order as the input. Although more complicated than this, it can essentially be thought of as [1,2,3,4…]
Then I use Dataloader with multiple workers like the following
loader = DataLoader(iterable_dataset, batch_size=256, num_workers=2, worker_init_fn=worker_init_fn)
for batch in loader:
print(batch)
However, I find the batches strictly follow the original order of the iterable [1,2,3,4]. Even if I deliberately delay the first worker (which outputs 1 and 2) relative to the second worker (which outputs 3 and 4), this for loop still yields data in the original order, i.e., 1, 2, 3, 4. That makes me believe that although the DataLoader workers process data in parallel (I timed the worker processes right before each yield, and I am pretty sure both workers start working separately at time 0), they coordinate to always keep the original order of the dataset. For example, if 2, 3, 4 are all ready earlier, they will still be blocked waiting for 1 to finish, and the yield order will only ever be 1, 2, 3, 4.
Is this expected behavior? It looks true to me, and I feel it is sub-optimal as it is not a real queue. If this is the case, I am okay with it, but I am curious which line of the source code takes care of that. If it is not, could you please hint at what misunderstanding or code error on my side makes it look like this?
Thank you so much! |
st178605 | I wonder if it is valid to manually edit a tensor’s grad of a DDP model before syncing the gradient. This is what I am trying to do:
1. base_model = MyModel().cuda(device_ids[0])
2. ddp_model = DDP(base_model, device_ids)
3. outputs = ddp_model(input)
4. loss1, loss2 = loss_fn(outputs)
5. with ddp_model.no_sync():
6. local_loss = loss1 + 0. * loss2
7. local_loss.backward(retain_graph=True)
8. for p in base_model.sub_model2.parameters():
9. p.grad *= 0.
10. for p in base_model.sub_model3.parameters():
11. p.grad *= 0.
12. local_loss = 0. * loss1 + loss2
13. local_loss.backward()
14. optimizer.step()
As shown in the code snippet above, I manually modify the base_model’s gradient at lines 8 - 11 before syncing the gradient at line 13. My goal is to use loss1 to only update sub_model1 in the base_model, and use loss2 to update the whole base_model.
The code runs without error, but I am concerned that this manual modification of tensor gradients might cause issues with the gradient sync mechanism in DDP. |
st178606 | Hey @albert.cwkuo
With the above code, I think DDP still syncs all grads for both loss1 and loss2, because the flag controlled by no_sync ctx manager is used when calling DistributedDataParallel.forward(). So, as the forward is out of the no_sync context, DDP would still prepare to sync all grads during the backward pass.
github.com
pytorch/pytorch/blob/5036c94a6e868963e0354fc04c92e204d8d77677/torch/nn/parallel/distributed.py#L477-L498
@contextmanager
def no_sync(self):
r"""
A context manager to disable gradient synchronizations across DDP
processes. Within this context, gradients will be accumulated on module
variables, which will later be synchronized in the first
forward-backward pass exiting the context.
Example::
>>> ddp = torch.nn.DistributedDataParallel(model, pg)
>>> with ddp.no_sync():
... for input in inputs:
... ddp(input).backward() # no synchronization, accumulate grads
... ddp(another_input).backward() # synchronize grads
"""
old_require_backward_grad_sync = self.require_backward_grad_sync
self.require_backward_grad_sync = False
try:
yield
(file truncated in the embed; see the link above for the full source)
How is MyModel implemented? Does it contain two independent submodules sub_model1 and sub_model2 and do something like the following?
class MyModel(nn.Module):
def __init__(self):
self.sub_model1 = SomeModel1()
self.sub_model2 = SomeModel2()
def forward(self, input):
return self.sub_model1(input), self.sub_model2(input) |
st178607 | Thanks @mrshenli for your reply. This is what MyModel do internally.
class MyModel(nn.Module):
def __init__(self):
self.sub_model1 = SomeModel1()
self.sub_model2 = SomeModel2()
def forward(self, input):
out1 = self.sub_model2(input)
out2 = self.sub_model1(out1)
return out1, out2
I want the gradient to be synced eventually when I call backward() at line 11, so the above code seems correct? My goal is to accumulate gradient from both loss1 and loss2 to sub_model2 and accumulate gradient only from loss2 to sub_model1. That’s why I try to zero out grad in sub_model1.
Note that I use local_loss = loss1 + 0.0 * loss2 and local_loss = 0.0 * loss1 + loss2 to mask out part of the loss before calling local_loss.backward().
----------------update------------
class MyModel(nn.Module):
def __init__(self):
self.sub_model1 = SomeModel1()
self.sub_model2 = SomeModel2()
self.sub_model3 = SomeModel3()
def forward(self, input):
out1 = self.sub_model2(self.sub_model1(input))
out2 = self.sub_model3(out1)
return out1, out2
My goal is to accumulate gradient from both loss1 and loss2 to sub_model1 and accumulate gradient only from loss2 to sub_model2 and sub_model3. That’s why I try to zero out grad in sub_model2 and sub_model3. |
st178608 | albert.cwkuo:
My goal is to accumulate gradient from both loss1 and loss2 to sub_model2 and accumulate gradient only from loss2 to sub_model1.
In that case, calling backward once on loss1+loss2 might be sufficient? Is the following result what you want?
import torch
import torch.nn as nn
class MyModel(nn.Module):
def __init__(self):
super().__init__()
with torch.no_grad():
self.net1 = nn.Linear(1, 1)
self.net1.weight.copy_(torch.ones(1, 1))
self.net1.bias.copy_(torch.zeros(1))
self.net2 = nn.Linear(1, 1)
self.net2.weight.copy_(torch.ones(1, 1))
self.net2.bias.copy_(torch.zeros(1))
def forward(self, x):
out1 = self.net1(x)
out2 = self.net2(out1)
return out1 + out2
print("==============")
model = MyModel()
model(torch.ones(1, 1)).sum().backward()
print("net1 grad is: ", model.net1.weight.grad)
print("net2 grad is: ", model.net2.weight.grad)
print("==============")
model = MyModel()
model.net1(torch.ones(1, 1)).sum().backward()
print("net1 grad is: ", model.net1.weight.grad)
print("net1 grad is: ", model.net2.weight.grad)
print("==============")
model = MyModel()
model.net2(torch.ones(1, 1)).sum().backward()
print("net1 grad is: ", model.net1.weight.grad)
print("net2 grad is: ", model.net2.weight.grad)
outputs are
==============
net1 grad is: tensor([[2.]])
net2 grad is: tensor([[1.]])
==============
net1 grad is: tensor([[1.]])
net2 grad is: None
==============
net1 grad is: None
net2 grad is: tensor([[1.]]) |
st178609 | Okay, that makes sense, but let me clarify how MyModel works internally, since the proposed solution may not work in this case. The reply has been updated.
In short, I have 3 subnets inside MyModel. One of the losses depends on sub_model1 and sub_model2, but I only want to update sub_model1 with this loss. Therefore I need to zero out the grad in sub_model2 when calling backward on that loss. |
st178610 | albert.cwkuo:
My goal is to accumulate gradient from both loss1 and loss2 to sub_model1 and accumulate gradient only from loss2 to sub_model2 and sub_model3.
With the above statement, should the forward function be something like below?
def forward(self, input):
out1 = self.sub_model1(input)
out2 = self.sub_model3(self.sub_model2(out1))
return out1, out2
With this code, out1.sum().backward() will only compute grads for sub_model1, and out2.sum().backward() will compute grads for all sub-models. And (out1 + out2).sum().backward() should meet the cited statement above. |
st178611 | In my use case it’s
def forward(self, input):
out1 = self.sub_model2(self.sub_model1(input))
out2 = self.sub_model3(out1)
return out1, out2
That's the tricky part. That's why I want to zero out the gradient of sub_model2 after calling backward on loss1 (loss1 is computed on out1). |
st178612 | I am training a NAS network that would sometimes be out of memory on the GPUs due to different predicted model configurations on each run.
def check_oom(func):
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except RuntimeError:
torch.cuda.empty_cache()
return None
return wrapper
# model's forward function
@check_oom
def forward(model, input):
return model(input)
# main loop
def main():
dataset = Dataloader(...)
net = Model(...)
for input in dataset:
output = forward(net , input)
(....... training code)
torch.cuda.synchronize()
Above is the pseudo-code I use for my training. However, in practice, OOM events hang the entire training, with the GPUs maxed out at 100% utilization.
What can I do? |
st178613 | Solved by mrshenli in post #2
Hey @Scott_Hoang, yes, this is the expected behavior, as an OOM in one of the processes will lead to AllReduce communication desync across the entire DDP gang. https://pytorch.org/elastic is built to solve this problem. It will destruct all DDP instances across all processes, reconstruct a new gang, an… |
st178614 | Hey @Scott_Hoang, yes, this is the expected behavior, as an OOM in one of the processes will lead to AllReduce communication desync across the entire DDP gang. https://pytorch.org/elastic is built to solve this problem. It will destruct all DDP instances across all processes, reconstruct a new gang, and then recover from the previous checkpoint.
cc @Kiuk_Chung |
st178615 | Hi,
I have a question about the architecture of distributed PyTorch!
When I run some examples, I saw that we can send and receive directly from worker A to worker B.
Why do we need MASTER_PORT and MASTER_ADDRESS?
For the port, I can understand that they need this number to recognize which other workers belong to the same program. However, I do not understand why we need master_addr?
If it were a master-worker model, I think there would be no problem, as the master worker would manage all the work.
Thanks, |
st178616 | Hey @ph0123
The reason is that when we implemented torch.distributed.rpc, we wanted to abstract out the comm layer and reuse whatever is available in torch.distributed. At that time, ProcessGroup was the only option we had, which requires a rendezvous during initialization. The master port and address are needed for that rendezvous. Subsequent communications do not go through the master address.
As of v1.6.0, we added a new P2P comm backend implementation, https://pytorch.org/docs/master/rpc.html#tensorpipe-backend. And we do plan to remove the requirement for MASTER_PORT/MASTER_ADDRESS. |
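For context, a sketch of that rendezvous: every worker is given the same MASTER_ADDR/MASTER_PORT so they can find each other once during init_process_group. The address and port below are example placeholders, and RANK/WORLD_SIZE are assumed to be set by the launcher.
import os
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "10.0.0.1")   # address of the rank-0 host (example)
os.environ.setdefault("MASTER_PORT", "29500")      # any free port on that host (example)

dist.init_process_group(
    backend="gloo",
    init_method="env://",
    rank=int(os.environ["RANK"]),
    world_size=int(os.environ["WORLD_SIZE"]),
)
# after this rendezvous, point-to-point traffic does not go through the master address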
st178617 | Hi,
These days I've been accelerating the training of models with DistributedDataParallel, with NCCL used as the backend of torch.distributed. Currently, I am trying to do validation with a list of strings stored in memory. However, with the multi-process mechanism, it's harder to share the list across different ranks than in DP mode. Is there any good way to solve this? |
st178618 | There is a PR to provide such a feature for general Python objects, but it has not landed yet. You can copy that code for now.
github.com/pytorch/pytorch — PR "[distributed] implement all_gather for arbitrary python objects" (rohan-varma, opened Oct 28, 2019) |
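Until that feature lands, one workaround is to pickle the Python object into a byte tensor, pad to a common length, and use the existing all_gather; this is only a sketch, not the API from the linked PR:
import pickle
import torch
import torch.distributed as dist

def all_gather_object_sketch(obj, device):
    data = torch.tensor(list(pickle.dumps(obj)), dtype=torch.uint8, device=device)
    local_size = torch.tensor([data.numel()], device=device)

    # find the largest payload so every rank contributes an equal-sized tensor
    sizes = [torch.zeros_like(local_size) for _ in range(dist.get_world_size())]
    dist.all_gather(sizes, local_size)
    max_size = int(max(s.item() for s in sizes))

    padded = torch.zeros(max_size, dtype=torch.uint8, device=device)
    padded[:data.numel()] = data
    buffers = [torch.zeros_like(padded) for _ in range(dist.get_world_size())]
    dist.all_gather(buffers, padded)

    # trim the padding on each rank's payload and unpickle
    return [pickle.loads(bytes(b[:int(s.item())].tolist())) for b, s in zip(buffers, sizes)]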
st178619 | Hello,
I am trying to use the then method that is explained here, along with rpc_async. Yet, I get this error:
AttributeError: ‘torch.distributed.rpc.Future’ object has no attribute ‘then’
I tried just importing torch.futures.Future and I got also an error:
ModuleNotFoundError: No module named ‘torch.futures’
Yet, importing torch.distributed.rpc.Future works. But the imported class does not have a constructor, and running inspect.getmembers(Future, predicate=inspect.isfunction) gives an empty list.
I have the latest torch version (1.5.1).
Would you please help me with this issue?
Thanks,
Arsany |
st178620 | Hey @aguirguis, the torch.futures package is introduced as an experimental feature in v1.6. If you use the nightly binary or compile from master, the then API should be there.
Sorry, we should have mentioned that it is only available for v1.6+ |
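A sketch of the v1.6+ usage, assuming rpc.init_rpc(...) has already been called on every process and a peer named "worker1" exists:
import torch
import torch.distributed.rpc as rpc

def add(a, b):
    return a + b

fut = rpc.rpc_async("worker1", add, args=(torch.ones(2), torch.ones(2)))
chained = fut.then(lambda f: f.wait() + 1)   # the callback receives the completed future
print(chained.wait())                        # tensor([3., 3.])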
st178621 | I’m using DDP to train Neural Architecture Search networks which contained a controller and a model network. During training, my controller predictss a model’s architecture that maximize reward. the call looks like this.
# both model and controller are torch.nn.DistributedDataParallel
arch = controller.forward(conditions)
model.module.set_arch(arch) # modified model internal architecture.
output = model.forward(input)...
However, in DDP docs I noticed the following:
… warning::
You should never try to change your model’s parameters after wrapping
up your model with DistributedDataParallel. In other words, when
wrapping up your model with DistributedDataParallel, the constructor of
DistributedDataParallel will register the additional gradient
reduction functions on all the parameters of the model itself at the
time of construction. If you change the model’s parameters after
the DistributedDataParallel construction, this is not supported and
unexpected behaviors can happen, since some parameters’ gradient
reduction functions might not get called.
So I’m just wondering what is the correct way to do this? or if NAS is not suitable with DDP. |
st178622 | Solved by mrshenli in post #9
IIUC, that will still remove DDP autograd hooks on self._arch.
Question, do you need the backward pass to compute the gradients for self._arch? If not, you can explicitly setting self._arch.requires_grad = False before passing the model to DDP ctor to tell DDP to ignore self._arch. Then, the above … |
st178623 | model.module.set_arch(arch) # modified model internal architecture.
By doing the above, are you removing parameters from the model or adding new parameters into the model? If yes, then it won’t work with DDP, as DDP creates communication buckets at construction time using the parameters returned by model.parameters() field. Hence, if the model.parameters() returns a different set of parameters, DDP won’t adapt to it.
To make it work, you can create a new DDP instance using the modified model whenever the model gets updated. But all DDP processes need to do the same at the same time using the same model.
If it just changes the value of those parameters, it should be fine. |
st178624 | Can you clarify the difference between modifying and replacing?
def __init__(self):
self._arch = torch.variable(<shape>, requires_grad=True)
def set_arch(self, arch):
self._arch = arch # is this modifying or replacing? |
st178625 | I believe this is replacing. You can use self._arch.copy_(arch) to override the value. See the code below.
import torch
x = torch.zeros(2, 2)
y = torch.ones(2, 2)
print("x storage: ", x.data_ptr())
print("y storage: ", y.data_ptr())
x = y
print("x storage: ", x.data_ptr())
z = torch.zeros(2, 2) + 2
print("z storage: ", z.data_ptr())
x.copy_(z)
print("x storage: ", x.data_ptr())
print(x)
outputs are:
x storage: 94191491020800
y storage: 94191523992320
x storage: 94191523992320
z storage: 94191523994816
x storage: 94191523992320
tensor([[2., 2.],
[2., 2.]]) |
st178626 | This might be it. If the DDP wrapper kept a pointer to my arch settings, then it will not see the new value, since the new tensor has a different pointer.
So does that mean that DDP.module params is a stale copy of our model?? |
st178627 | So does that mean that DDP.module params is a stale copy of our model??
I believe so. As DDP remembers the variables at construction time:
github.com
pytorch/pytorch/blob/09285070a70d146b158db1e1e44b2c031a5c70b0/torch/csrc/distributed/c10d/reducer.cpp#L32
}
} // namespace
Reducer::Reducer(
std::vector<std::vector<torch::autograd::Variable>> replicas,
std::vector<std::vector<size_t>> bucket_indices,
std::shared_ptr<c10d::ProcessGroup> process_group,
std::vector<std::vector<bool>> expect_sparse_gradients,
int64_t bucket_bytes_cap)
: replicas_(std::move(replicas)),
process_group_(std::move(process_group)),
expect_sparse_gradients_(std::move(expect_sparse_gradients)),
expect_autograd_hooks_(false),
require_finalize_(false),
next_bucket_(0),
has_marked_unused_parameters_(false),
local_used_maps_reduced_(false),
backward_stats_base_(0),
has_rebuilt_bucket_(false),
bucket_bytes_cap_(bucket_bytes_cap) {
And there might be more to it than that. DDP might not be able to read that value at all, because DDP registers a backward hook on each parameter and relies on that hook to notify DDP when and what to read. Those hooks are installed at DDP construction time as well. If you create a new variable and assign it to self._arch, that hook might be lost.
cc @albanD is the above statement on variable hook correct? |
st178628 | Hi,
Yes I think this note explicitly warns you against doing this. You should not change the Parameters.
As a side note, you should never call the forward() of your module directly but call module(input). |
st178629 | What if I modified my forward function such that:
> forward(input, arch):
> self._arch = arch
Will this work?
Also, does DDP keep the DDP.module values up to date? |
st178630 | IIUC, that will still remove DDP autograd hooks on self._arch.
Question: do you need the backward pass to compute the gradients for self._arch? If not, you can explicitly set self._arch.requires_grad = False before passing the model to the DDP ctor to tell DDP to ignore self._arch. Then, the above assignment would work. |
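A sketch of that suggestion (MyNASModel and local_rank are placeholders for the user's own model and rank):
import torch

model = MyNASModel()                     # placeholder for the real NAS model
model._arch.requires_grad = False        # DDP will then skip hooking/syncing this tensor
ddp_model = torch.nn.parallel.DistributedDataParallel(
    model.cuda(local_rank), device_ids=[local_rank], output_device=local_rank
)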
st178631 | I have a model that fits perfectly fine on any one of my GPUs (4x1080Tis), but I had the bright idea that maybe I could speed up a forward pass (at inference time) by partitioning one of the layers (a very "tall" Conv2d, i.e. >20 output channels) across all of my GPUs. So I used DDP to map across my GPUs, and surprisingly (or maybe not?) the forward pass actually gets slower with an increasing number of GPUs. Is this to be expected?
I’m not an expert on the execution pipeline on GPUs but is it the case that any individual CUDA kernel (e.g. my “tall” Conv2d) gets executed in parallel? I’m guessing that that’s my issue - that the layer I’m partitioning up already gets executed in parallel and the scatter/gather just adds copy (and process instantiation) latencies. |
st178632 | makslevental:
maybe I could speed up a forward pass (at inference time) by partitioning up one of the layers (a very “tall” Conv2d - i.e. >20 output channels) across all of my GPUs. So I used DDP to map across my GPUs
Hey @makslevental could you please elaborate on how did you manage to use DDP after splitting one layer? Does it mean each DDP process now sees a different model? Or is it true that each DDP process no longer has exclusive access to its own GPU?
and surprisingly (or maybe not?) forward pass actually gets slower with increasing number of GPUs. Is this to be expected?
Since with the split, each forward pass will do cross-device communications, so it is possible to see slowdowns. |
st178633 | @mrshenli I split my network across the layer boundary so yes each DDP now sees a different model - imagine instead of a 48 output channel Conv2d applying 4x12 output channel Conv2ds.
mrshenli:
Since with the split, each forward pass will do cross-device communications, so it is possible to see slowdowns.
why is there cross-talk? I run with torch.no_grad() and for p in model.parameters(): p.requires_grad = False in every run.
st178634 | makslevental:
I split my network across the layer boundary so yes each DDP now sees a different model
I see. Is this forward-only for inference, or do you also run backward for training? If it is the latter, this might break the correctness of DDP, as DDP expects the model in each process to be exactly the same; otherwise, the AllReduce communication across DDP processes could mess up the gradients.
why is there cross-talk? I with with torch.no_grad() and for p in model.parameters(): p.requires_grad = False in every run .
Looks like you are doing inference instead of training? In this case, don’t you need to somehow gather/combine the outputs from the four different Conv2d layers from 4 different DDP processes? Otherwise, how did you get the final inference result?
BTW, since this is inference only, why do you need DDP? |
st178635 | Sorry, actually I just realized I've completely misspoken. I'm not wrapping my model in DDP. I was planning on doing this, and then I realized it replicates across nodes, whereas I need to send distinct (but related) models to nodes.
mrshenli:
Looks like you are doing inference instead of training? In this case, don’t you need to somehow gather/combine the outputs from the four different Conv2d layers from 4 different DDP processes? Otherwise, how did you get the final inference result?
Yes this is correct, I do a map and then I plan on doing a concat to reconstruct the output channels as if they all came from the same Conv2d. I think you can basically get the idea from this code snippet
def run(rank, size):
with torch.no_grad():
image_pth = Path(os.path.dirname(os.path.realpath(__file__))) / Path(
"../simulation/screenshot.png"
)
screenshot = SimulPLIF(img_path=image_pth, num_repeats=1, load_truth=False)
img_height, img_width = screenshot[0].squeeze(0).numpy().shape
from nn_dog import PIN_MEMORY
train_dataloader = DataLoader(screenshot, batch_size=1, pin_memory=PIN_MEMORY)
dog = DifferenceOfGaussiansFFT(
img_height=img_height,
img_width=img_width,
sigma_bins=48 // size,
max_sigma=30,
).to(rank, non_blocking=PIN_MEMORY)
for p in dog.parameters():
p.requires_grad = False
dog.eval()
torch.cuda.synchronize(rank)
dogs = []
for i in range(10):
img_tensor = next(iter(train_dataloader))
img_tensor = img_tensor.to(rank)
torch.cuda.synchronize(rank)
dogs.append(dog(img_tensor))
return dogs
def init_process(rank_size_fn, backend="nccl"):
rank, size, fn = rank_size_fn
""" Initialize the distributed environment. """
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"
dist.init_process_group(backend, rank=rank, world_size=size)
return fn(rank, size)
if __name__ == "__main__":
set_start_method("spawn")
size = 4
pool = Pool(processes=size)
start = time.monotonic()
res = pool.map(init_process, [(i, size, run) for i in range(size)])
end = time.monotonic()
print(end - start) |
st178636 | I see. How did you measure the latency of the forward pass? Did you use elapsed_time from CUDA events to wrap dog(img_tensor)? |
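For reference, a sketch of timing the forward pass with CUDA events (dog and img_tensor are the objects from the snippet above):
import torch

start_evt = torch.cuda.Event(enable_timing=True)
end_evt = torch.cuda.Event(enable_timing=True)

start_evt.record()
out = dog(img_tensor)          # the forward pass being measured
end_evt.record()

torch.cuda.synchronize()       # wait until the recorded events have completed
print(f"forward took {start_evt.elapsed_time(end_evt):.2f} ms")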
st178637 | it’s a rough measure but it’s right there
start = time.monotonic()
res = pool.map(init_process, [(i, size, run) for i in range(size)])
end = time.monotonic()
print(end - start)
then since in run i repeat 10 times i reason that i’m amortizing process instantiation across those 10 inference passes. and so between 1 and 4 GPUs end-start just grows. |
st178638 | makslevental:
for i in range(10):
img_tensor = next(iter(train_dataloader))
img_tensor = img_tensor.to(rank)
torch.cuda.synchronize(rank)
dogs.append(dog(img_tensor))
Could you please wrap the above code with some time measurement and check what percentage it contributes to the total delay of end - start?
BTW, you might be able to avoid the torch.cuda.synchronize(rank) above, as the copy (tensor.to) should synchronize the destination device properly. |
st178639 | okay i removed the sync and enclosed the loop in time.monotonic.
2 gpus:
1.2319540430326015
1.3151386808604002
total: 6.296403981978074
3 gpus:
1.1622967889998108
1.3194731972180307
1.3116707119625062
total: 5.875259702792391
4 gpus:
1.1516663811635226
1.4554521720856428
1.76222850009799
1.8313195349182934
total: 6.504983321996406
i ran this several times it’s consistent. |
st178640 | The large variance (1.15 vs 1.83) seems to suggest there is some sort of contention. I wonder if that is caused by the data loading. What if we use CUDA event elapsed_time to measure dog(img_tensor)? Note that time.monotonic() does not guarantee correct measurements, as there could still be CUDA ops pending in the stream.
Or is it possible to get a self-contained example that we can investigate locally? E.g., using torch.rand to create random inputs instead of loading from screenshot.png |
st178641 | this is self-contained; except for the standard stuff (numpy, pytorch) you only need opt_einsum
import math
import numbers
import os
import time
from functools import partial
from typing import Tuple
import numpy as np
import torch
import torch.distributed as dist
from opt_einsum import contract
from torch import nn
from torch.multiprocessing import set_start_method, Pool
class DifferenceOfGaussiansFFT(nn.Module):
def __init__(
self,
*,
img_height: int,
img_width: int,
min_sigma: int = 1,
max_sigma: int = 10,
sigma_bins: int = 50,
truncate: float = 5.0,
):
super(DifferenceOfGaussiansFFT, self).__init__()
self.img_height = img_height
self.img_width = img_width
self.signal_ndim = 2
self.sigma_list = np.concatenate(
[
np.linspace(min_sigma, max_sigma, sigma_bins),
[max_sigma + (max_sigma - min_sigma) / (sigma_bins - 1)],
]
)
sigmas = torch.from_numpy(self.sigma_list)
self.register_buffer("sigmas", sigmas)
# print("gaussian pyramid sigmas: ", len(sigmas), sigmas)
# accommodate largest filter
self.max_radius = int(truncate * max(sigmas) + 0.5)
max_bandwidth = 2 * self.max_radius + 1
# pad fft to prevent aliasing
padded_height = img_height + max_bandwidth - 1
padded_width = img_width + max_bandwidth - 1
# round up to next power of 2 for cheaper fft.
self.fft_height = 2 ** math.ceil(math.log2(padded_height))
self.fft_width = 2 ** math.ceil(math.log2(padded_width))
self.pad_input = nn.ConstantPad2d(
(0, self.fft_width - img_width, 0, self.fft_height - img_height), 0
)
self.f_gaussian_pyramid = []
kernel_pad = nn.ConstantPad2d(
# left, right, top, bottom
(0, self.fft_width - max_bandwidth, 0, self.fft_height - max_bandwidth),
0,
)
for i, s in enumerate(sigmas):
radius = int(truncate * s + 0.5)
width = 2 * radius + 1
kernel = torch_gaussian_kernel(width=width, sigma=s.item())
# this is to align all of the kernels so that the eventual fft shifts a fixed amount
center_pad_size = self.max_radius - radius
if center_pad_size > 0:
centered_kernel = nn.ConstantPad2d(center_pad_size, 0)(kernel)
else:
centered_kernel = kernel
padded_kernel = kernel_pad(centered_kernel)
f_kernel = torch.rfft(
padded_kernel, signal_ndim=self.signal_ndim, onesided=True
)
self.f_gaussian_pyramid.append(f_kernel)
self.f_gaussian_pyramid = nn.Parameter(
torch.stack(self.f_gaussian_pyramid, dim=0), requires_grad=False
)
def forward(self, input: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
img_height, img_width = list(input.size())[-self.signal_ndim:]
assert (img_height, img_width) == (self.img_height, self.img_width)
padded_input = self.pad_input(input)
f_input = torch.rfft(padded_input, signal_ndim=self.signal_ndim, onesided=True)
f_gaussian_images = comp_mul(self.f_gaussian_pyramid, f_input)
gaussian_images = torch.irfft(
f_gaussian_images,
signal_ndim=self.signal_ndim,
onesided=True,
signal_sizes=padded_input.shape[1:],
)
# fft induces a shift so needs to be undone
gaussian_images = gaussian_images[
:, # batch dimension
:, # filter dimension
self.max_radius: self.img_height + self.max_radius,
self.max_radius: self.img_width + self.max_radius,
]
return gaussian_images
def torch_gaussian_kernel(
width: int = 21, sigma: int = 3, dim: int = 2
) -> torch.Tensor:
"""Gaussian kernel
Parameters
----------
width: bandwidth of the kernel
sigma: std of the kernel
dim: dimensions of the kernel (images -> 2)
Returns
-------
kernel : gaussian kernel
"""
if isinstance(width, numbers.Number):
width = [width] * dim
if isinstance(sigma, numbers.Number):
sigma = [sigma] * dim
kernel = 1
meshgrids = torch.meshgrid(
[torch.arange(size, dtype=torch.float32) for size in width]
)
for size, std, mgrid in zip(width, sigma, meshgrids):
mean = (size - 1) / 2
kernel *= (
1
/ (std * math.sqrt(2 * math.pi))
* torch.exp(-(((mgrid - mean) / std) ** 2) / 2)
)
# Make sure sum of values in gaussian kernel equals 1.
kernel = kernel / torch.sum(kernel)
return kernel
def chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i: i + n]
def comp_mul(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
"""Complex multiplies two complex 3d tensors
x = (x_real, x_im)
y = (y_real, y_im)
x*y = (x_real*y_real - x_im*y_im, x_real*y_im + x_im*y_real)
Last dimension is x2 with x[..., 0] real and x[..., 1] complex.
Dimensions (-3,-2) must be equal of both a and b must be the same.
Examples
________
>>> f_filters = torch.rand((20, 1024, 1024, 2))
>>> f_imgs = torch.rand((5, 1024, 1024, 2))
>>> f_filtered_imgs = comp_mul(f_filters, f_imgs)
Parameters
----------
x : Last dimension is (a,b) of a+ib
y : Last dimension is (a,b) of a+ib
Returns
-------
z : x*y
"""
# hadamard product of every filter against every batch image
op = partial(contract, "fuv,buv->bfuv")
assert x.shape[-1] == y.shape[-1] == 2
x_real, x_im = x.unbind(-1)
y_real, y_im = y.unbind(-1)
z = torch.stack(
[op(x_real, y_real) - op(x_im, y_im), op(x_real, y_im) + op(x_im, y_real)],
dim=-1,
)
return z
def run(rank, size):
with torch.no_grad():
img_tensor = torch.rand((1, 1, 1000, 1000))
dog = DifferenceOfGaussiansFFT(
img_height=1000,
img_width=1000,
sigma_bins=48 // size,
max_sigma=30,
).to(rank, non_blocking=True)
for p in dog.parameters():
p.requires_grad = False
dog.eval()
torch.cuda.synchronize(rank)
dogs = []
start = time.monotonic()
for i in range(10):
img_tensor = img_tensor.to(rank)
# torch.cuda.synchronize(rank)
dogs.append(dog(img_tensor))
end = time.monotonic()
print(end - start)
return dogs
def init_process(rank_size_fn, backend="nccl"):
rank, size, fn = rank_size_fn
""" Initialize the distributed environment. """
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"
dist.init_process_group(backend, rank=rank, world_size=size)
return fn(rank, size)
if __name__ == "__main__":
set_start_method("spawn")
size = 2
pool = Pool(processes=size)
start = time.monotonic()
res = pool.map(init_process, [(i, size, run) for i in range(size)])
end = time.monotonic()
print(end - start)
pool.close()
size = 3
pool = Pool(processes=size)
start = time.monotonic()
res = pool.map(init_process, [(i, size, run) for i in range(size)])
end = time.monotonic()
print(end - start)
pool.close()
size = 4
pool = Pool(processes=size)
start = time.monotonic()
res = pool.map(init_process, [(i, size, run) for i in range(size)])
end = time.monotonic()
print(end - start)
pool.close()
# print(res)
thanks for helping me with this btw! |
st178642 | I conda installed opt_einsum but hit the following error. Is there a specific version of opt_einsum that I should use? The installed one is opt_einsum-3.2.1.
Traceback (most recent call last):
File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/scratch/shenli/pytorch/test.py", line 229, in init_process
return fn(rank, size)
File "/scratch/shenli/pytorch/test.py", line 215, in run
dogs.append(dog(img_tensor))
File "/scratch/shenli/pytorch/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/scratch/shenli/pytorch/test.py", line 89, in forward
f_gaussian_images = comp_mul(self.f_gaussian_pyramid, f_input)
File "/scratch/shenli/pytorch/test.py", line 185, in comp_mul
[op(x_real, y_real) - op(x_im, y_im), op(x_real, y_im) + op(x_im, y_real)],
File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/site-packages/opt_einsum/contract.py", line 473, in contract
operands, contraction_list = contract_path(*operands,
File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/site-packages/opt_einsum/contract.py", line 222, in contract_path
raise ValueError("Einstein sum subscript '{}' does not contain the "
ValueError: Einstein sum subscript 'buv' does not contain the correct number of indices for operand 1. |
st178643 | it’s because i got the dimensions of img_tensor wrong. i guess it should be (1,1000,1000). |
st178644 | I only have two GPUs, so I tested size == 1 and size == 2 using CUDA events. It looks like the forward pass with 2 GPUs is actually faster? I attached the code I am running below:
====== size = 1 ======
Iteration 0 forward latency is 340.7067565917969
Iteration 1 forward latency is 46.39555358886719
Iteration 2 forward latency is 46.37984085083008
Iteration 3 forward latency is 46.37712097167969
Iteration 4 forward latency is 46.3746223449707
Iteration 5 forward latency is 46.35868835449219
Iteration 6 forward latency is 46.370174407958984
Iteration 7 forward latency is 46.40425491333008
Iteration 8 forward latency is 46.36265563964844
Iteration 9 forward latency is 46.36454391479492
end - start = 0.7640056293457747
====== size = 2 ======
Iteration 0 forward latency is 336.1044616699219
Iteration 1 forward latency is 26.22003173828125
Iteration 2 forward latency is 27.49286460876465
Iteration 3 forward latency is 26.249248504638672
Iteration 4 forward latency is 26.69696044921875
Iteration 5 forward latency is 26.118335723876953
Iteration 6 forward latency is 27.30339241027832
Iteration 7 forward latency is 23.886367797851562
Iteration 8 forward latency is 23.869632720947266
Iteration 9 forward latency is 23.936511993408203
end - start = 0.5738828824833035
Iteration 0 forward latency is 312.13189697265625
Iteration 1 forward latency is 24.0633602142334
Iteration 2 forward latency is 23.685983657836914
Iteration 3 forward latency is 23.70742416381836
Iteration 4 forward latency is 23.703231811523438
Iteration 5 forward latency is 23.78976058959961
Iteration 6 forward latency is 23.779136657714844
Iteration 7 forward latency is 23.787424087524414
Iteration 8 forward latency is 23.791616439819336
Iteration 9 forward latency is 23.80246353149414
end - start = 2.9916703598573804
import math
import numbers
import os
import time
from functools import partial
from typing import Tuple
import numpy as np
import torch
import torch.distributed as dist
from opt_einsum import contract
from torch import nn
from torch.multiprocessing import set_start_method, Pool
class DifferenceOfGaussiansFFT(nn.Module):
def __init__(
self,
*,
img_height: int,
img_width: int,
min_sigma: int = 1,
max_sigma: int = 10,
sigma_bins: int = 50,
truncate: float = 5.0,
):
super(DifferenceOfGaussiansFFT, self).__init__()
self.img_height = img_height
self.img_width = img_width
self.signal_ndim = 2
self.sigma_list = np.concatenate(
[
np.linspace(min_sigma, max_sigma, sigma_bins),
[max_sigma + (max_sigma - min_sigma) / (sigma_bins - 1)],
]
)
sigmas = torch.from_numpy(self.sigma_list)
self.register_buffer("sigmas", sigmas)
# print("gaussian pyramid sigmas: ", len(sigmas), sigmas)
# accommodate largest filter
self.max_radius = int(truncate * max(sigmas) + 0.5)
max_bandwidth = 2 * self.max_radius + 1
# pad fft to prevent aliasing
padded_height = img_height + max_bandwidth - 1
padded_width = img_width + max_bandwidth - 1
# round up to next power of 2 for cheaper fft.
self.fft_height = 2 ** math.ceil(math.log2(padded_height))
self.fft_width = 2 ** math.ceil(math.log2(padded_width))
self.pad_input = nn.ConstantPad2d(
(0, self.fft_width - img_width, 0, self.fft_height - img_height), 0
)
self.f_gaussian_pyramid = []
kernel_pad = nn.ConstantPad2d(
# left, right, top, bottom
(0, self.fft_width - max_bandwidth, 0, self.fft_height - max_bandwidth),
0,
)
for i, s in enumerate(sigmas):
radius = int(truncate * s + 0.5)
width = 2 * radius + 1
kernel = torch_gaussian_kernel(width=width, sigma=s.item())
# this is to align all of the kernels so that the eventual fft shifts a fixed amount
center_pad_size = self.max_radius - radius
if center_pad_size > 0:
centered_kernel = nn.ConstantPad2d(center_pad_size, 0)(kernel)
else:
centered_kernel = kernel
padded_kernel = kernel_pad(centered_kernel)
f_kernel = torch.rfft(
padded_kernel, signal_ndim=self.signal_ndim, onesided=True
)
self.f_gaussian_pyramid.append(f_kernel)
self.f_gaussian_pyramid = nn.Parameter(
torch.stack(self.f_gaussian_pyramid, dim=0), requires_grad=False
)
def forward(self, input: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
img_height, img_width = list(input.size())[-self.signal_ndim:]
assert (img_height, img_width) == (self.img_height, self.img_width)
padded_input = self.pad_input(input)
f_input = torch.rfft(padded_input, signal_ndim=self.signal_ndim, onesided=True)
f_gaussian_images = comp_mul(self.f_gaussian_pyramid, f_input)
gaussian_images = torch.irfft(
f_gaussian_images,
signal_ndim=self.signal_ndim,
onesided=True,
signal_sizes=padded_input.shape[1:],
)
# fft induces a shift so needs to be undone
gaussian_images = gaussian_images[
:, # batch dimension
:, # filter dimension
self.max_radius: self.img_height + self.max_radius,
self.max_radius: self.img_width + self.max_radius,
]
return gaussian_images
def torch_gaussian_kernel(
width: int = 21, sigma: int = 3, dim: int = 2
) -> torch.Tensor:
"""Gaussian kernel
Parameters
----------
width: bandwidth of the kernel
sigma: std of the kernel
dim: dimensions of the kernel (images -> 2)
Returns
-------
kernel : gaussian kernel
"""
if isinstance(width, numbers.Number):
width = [width] * dim
if isinstance(sigma, numbers.Number):
sigma = [sigma] * dim
kernel = 1
meshgrids = torch.meshgrid(
[torch.arange(size, dtype=torch.float32) for size in width]
)
for size, std, mgrid in zip(width, sigma, meshgrids):
mean = (size - 1) / 2
kernel *= (
1
/ (std * math.sqrt(2 * math.pi))
* torch.exp(-(((mgrid - mean) / std) ** 2) / 2)
)
# Make sure sum of values in gaussian kernel equals 1.
kernel = kernel / torch.sum(kernel)
return kernel
def chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i: i + n]
def comp_mul(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
"""Complex multiplies two complex 3d tensors
x = (x_real, x_im)
y = (y_real, y_im)
x*y = (x_real*y_real - x_im*y_im, x_real*y_im + x_im*y_real)
Last dimension is x2 with x[..., 0] real and x[..., 1] complex.
Dimensions (-3,-2) must be equal of both a and b must be the same.
Examples
________
>>> f_filters = torch.rand((20, 1024, 1024, 2))
>>> f_imgs = torch.rand((5, 1024, 1024, 2))
>>> f_filtered_imgs = comp_mul(f_filters, f_imgs)
Parameters
----------
x : Last dimension is (a,b) of a+ib
y : Last dimension is (a,b) of a+ib
Returns
-------
z : x*y
"""
# hadamard product of every filter against every batch image
op = partial(contract, "fuv,buv->bfuv")
assert x.shape[-1] == y.shape[-1] == 2
x_real, x_im = x.unbind(-1)
y_real, y_im = y.unbind(-1)
z = torch.stack(
[op(x_real, y_real) - op(x_im, y_im), op(x_real, y_im) + op(x_im, y_real)],
dim=-1,
)
return z
def run(rank, size):
with torch.no_grad():
img_tensor = torch.rand((1, 1000, 1000))
dog = DifferenceOfGaussiansFFT(
img_height=1000,
img_width=1000,
sigma_bins=48 // size,
max_sigma=30,
).to(rank, non_blocking=True)
for p in dog.parameters():
p.requires_grad = False
dog.eval()
torch.cuda.synchronize(rank)
dogs = []
start = time.monotonic()
s = torch.cuda.current_stream(rank)
e_start = torch.cuda.Event(enable_timing=True)
e_finish = torch.cuda.Event(enable_timing=True)
for i in range(10):
img_tensor = img_tensor.to(rank)
# torch.cuda.synchronize(rank)
s.record_event(e_start)
dogs.append(dog(img_tensor))
s.record_event(e_finish)
e_finish.synchronize()
print(f"Iteration {i} forward latency is {e_start.elapsed_time(e_finish)}")
end = time.monotonic()
print("end - start = ", end - start)
torch.cuda.synchronize(rank)
return dogs
def init_process(rank_size_fn, backend="nccl"):
rank, size, fn = rank_size_fn
""" Initialize the distributed environment. """
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"
dist.init_process_group(backend, rank=rank, world_size=size)
return fn(rank, size)
if __name__ == "__main__":
set_start_method("spawn")
size = 1
print("====== size = 1 ======")
pool = Pool(processes=size)
start = time.monotonic()
res = pool.map(init_process, [(i, size, run) for i in range(size)])
end = time.monotonic()
#print(end - start)
pool.close()
print("====== size = 2 ======")
size = 2
pool = Pool(processes=size)
start = time.monotonic()
res = pool.map(init_process, [(i, size, run) for i in range(size)])
end = time.monotonic()
#print(end - start)
pool.close()
    # print(res) |
st178645 | @mrshenli okay thanks I’ll try this. But also this means all of my other timings have been wrong. So thanks for showing me how to use cuda events too.
edit:
@mrshenli it turns out you need a synchronize after all
start = time.monotonic()
s = torch.cuda.current_stream(rank)
e_start = torch.cuda.Event(enable_timing=True)
e_finish = torch.cuda.Event(enable_timing=True)
s.record_event(e_start)
for i in range(10):
img_tensor = img_tensor_cpu.to(rank)
# torch.cuda.synchronize(rank)
dogs.append(dog(img_tensor))
torch.cuda.synchronize(rank)
s.record_event(e_finish)
e_finish.synchronize()
end = time.monotonic()
gives me this
====== size = 1 ======
rank 0 Iteration 9 forward latency is 1283.0596923828125
end - start = 1.2832060940563679
====== size = 2 ======
rank 0 Iteration 9 forward latency is 626.5835571289062
end - start = 0.6267032357864082
rank 1 Iteration 9 forward latency is 640.3717041015625
end - start = 0.6404897100292146
====== size = 3 ======
rank 0 Iteration 9 forward latency is 443.1278076171875
end - start = 0.44322703895159066
rank 1 Iteration 9 forward latency is 471.8766174316406
end - start = 0.47198665188625455
rank 2 Iteration 9 forward latency is 461.29559326171875
end - start = 0.46140363393351436
====== size = 4 ======
rank 0 Iteration 9 forward latency is 397.9264221191406
end - start = 0.3981346560176462
rank 2 Iteration 9 forward latency is 374.9112243652344
end - start = 0.3749916541855782
rank 3 Iteration 9 forward latency is 360.9978942871094
end - start = 0.3610941809602082
rank 1 Iteration 9 forward latency is 362.57073974609375
end - start = 0.3626508240122348 |
st178646 | I have a model that I would like to parallelize so that inference would be much faster. The input tensor to this network is at one point 195xHxW. My network then reshapes it to -1x3xHxW, which should normally work (65x3xHxW). But because DataParallel wraps the network, it divides the 195xHxW tensor into n pieces, where n is the number of GPUs. However, when dividing the tensor, it does so in such a way that the last piece can no longer be reshaped. Is there a way to get DataParallel to work with the model so that the tensor can still be properly reshaped? |
st178647 | Solved by rvarm1 in post #2
I’m assuming that since your tensor of shape 195HW is a single training example, you don’t want it to be split by N GPUs, since 195 is not the batch size in this case? If you pass in your training examples in the form such as batch_size * C * H * W for example then DataParallel/DDP should divide alo… |
st178648 | I’m assuming that since your tensor of shape 195HW is a single training example, you don’t want it to be split by N GPUs, since 195 is not the batch size in this case? If you pass in your training examples in the form such as batch_size * C * H * W for example then DataParallel/DDP should divide along the batch size dim.
Could you potentially paste a reproduction of this issue and the associated error message that you get?
Btw, if you are using utilities provided by PyTorch such as the DataLoader you can configure the drop_last argument to ensure that batch sizes are even. |
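A minimal sketch of that setup (an assumption-heavy illustration: it needs at least two visible GPUs, and the 65 frames of shape 3xHxW mirror the numbers in the question): batching along dim 0 lets DataParallel split on the batch dimension, and drop_last=True keeps every chunk evenly sized and reshapeable.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

H, W = 64, 64                                                 # hypothetical spatial size
dataset = TensorDataset(torch.randn(65, 3, H, W))             # 65 frames, each 3xHxW
loader = DataLoader(dataset, batch_size=16, drop_last=True)   # drop the short final batch

model = nn.DataParallel(nn.Conv2d(3, 8, kernel_size=3, padding=1)).cuda()
for (batch,) in loader:
    out = model(batch.cuda())   # input is split along dim 0 across the GPUs
    print(out.shape)            # (16, 8, H, W), gathered back on device 0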
st178649 | When using torch.nn.parallel.data_parallel for distributed training, models are copied onto multiple GPUs and can complete a forward pass without the copied models affecting each other. However, how do the copied models interact during the backward pass? How are the model weights updated on each GPU?
When reading the documentation [1] I see the explanation:
“gradients from each replica are summed into the original module”
but I’m not sure how to interpret this statement. My best guess is that each GPU’s model is updated with the sum of all the GPUs’ models’ gradients, which I would then interpret to mean that there is locking across GPUs, so they each start training on a new mini-batch only after they all finish processing their current mini-batch.
[1] https://pytorch.org/docs/master/nn.html?highlight=dataparallel#dataparallel-layers-multi-gpu-distributed 41 |
st178650 | Solved by albanD in post #4
Yes, the locking is built in and the weights will properly be updated before they are used. |
st178651 | Hi,
Each GPU computes the gradients for its part of the batch. Then they are accumulated on the “main” model, where the weight update is done. Then this “main” model shares its weights with all the other GPUs so that all models have the same weights. |
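In code, the flow looks roughly like this (a minimal sketch, assuming two visible GPUs; the replication, scatter, and gradient reduction all happen inside DataParallel’s forward/backward rather than in user code):
import torch
from torch import nn

model = nn.Linear(10, 2).cuda(0)                      # "main" copy lives on GPU 0
dp_model = nn.DataParallel(model, device_ids=[0, 1])
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(32, 10).cuda(0)
targets = torch.randn(32, 2).cuda(0)

loss = nn.functional.mse_loss(dp_model(inputs), targets)  # forward runs on both replicas
loss.backward()    # per-replica gradients are summed into the params on GPU 0
optimizer.step()   # single weight update on GPU 0
# the next dp_model(...) call re-replicates the updated weights to the other GPU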
st178652 | Hi Alban,
Thanks for clarifying. Does this mean that this parallelization utilizes locking, ensuring that each GPU model updates its weights from the “main” model before moving on to the next mini-batch? |
st178653 | Yes, the locking is built in and the weights will properly be updated before they are used. |
st178654 | Hey, I am facing some issues with data parallel. I am training on 4 V100s with a batch size of 1. The time of the forward pass seems to scale, but the backward pass is taking 4x as long in comparison to 1 V100. So there is no significant boost in speed when using 4 GPUs. I guess the backward pass is taking place on a single GPU. I am using nn.parallel.DataParallel; is there any solution to this problem? I can share more details if you want. |
st178655 | Hey @sanchit2843
When using batch_size == 1, DataParallel won’t be able to parallelize the computation, as the input cannot be chunked by the scatter linked below.
github.com
pytorch/pytorch/blob/df8d6eeb19423848b20cd727bc4a728337b73829/torch/nn/parallel/data_parallel.py#L151
def forward(self, *inputs, **kwargs):
if not self.device_ids:
return self.module(*inputs, **kwargs)
for t in chain(self.module.parameters(), self.module.buffers()):
if t.device != self.src_device_obj:
raise RuntimeError("module must have its parameters and buffers "
"on device {} (device_ids[0]) but found one of "
"them on device: {}".format(self.src_device_obj, t.device))
inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
if len(self.device_ids) == 1:
return self.module(*inputs[0], **kwargs[0])
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
outputs = self.parallel_apply(replicas, inputs, kwargs)
return self.gather(outputs, self.output_device)
def replicate(self, module, device_ids):
return replicate(module, device_ids, not torch.is_grad_enabled())
def scatter(self, inputs, kwargs, device_ids):
but the backward pass is taking 4x as long in comparison to 1 V100
Can you share a minimum repro of this? In general, a 4x slowdown is possible due to Python GIL contention (which you can avoid by using DistributedDataParallel). But I feel this should not be applicable here, as each batch only contains one sample, so the replicated models are not really involved in forward and backward. |
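To see the chunking behaviour directly, here is a small illustration (an assumption-laden sketch: it needs at least two visible GPUs and uses torch.nn.parallel.scatter, the same helper DataParallel’s forward uses to split the input):
import torch
from torch.nn.parallel import scatter

single = torch.randn(1, 3, 224, 224)   # batch_size == 1
batch = torch.randn(4, 3, 224, 224)    # batch_size == 4

print(len(scatter(single, target_gpus=[0, 1])))  # 1 -> only GPU 0 receives work
print(len(scatter(batch, target_gpus=[0, 1])))   # 2 -> each GPU receives 2 samples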
st178656 | Hey, thanks for the answer. Actually, I think it is because all 4 backpropagations are going through one GPU only. Is there any solution with which I can run it with a batch size of one? And what is the meaning of repro? Thanks in advance. |
st178657 | I tried with a batch size of 2 as well. The epoch time with a single GPU is 2.5 hours and with four GPUs it is 3.3 hrs. Any solution will be helpful. |
st178658 | Hi All!
I wanted to use a CosineAnnealingWithRestarts scheduler on Google Cloud TPU cores, with distributed parallel training.
I couldn’t find a tutorial which shows how to do that.
Please help me out. |
st178659 | Hey @chhablanig
Do you have a working training script without using LR schedulers?
cc @vincentqb for optimizer question
cc @ailzhang for XLA question |
st178660 | dataloader = ....(create ddp loader with ddp settings)
opts = ...parse() # user options
master = opts.local_rank == 0
model = create_model(opt)
model_ema = model.clone().eval() # keeping track of exponential moving average for model's weights
for data in dataloader():
# typical training code ... forward, backward and the likes
update_ema_weights(model_ema, model.state_dict()) # update the EMA from the model's current weights
if opt.validate:
if master:
for data in valid_dataloader():
output = model_ema(data).... # typical validate code
torch.cuda.synchronize()
Given the above pseudo-code, after validation my DDP processes will hang on all GPUs.
However, if I use model instead of model_ema for validation, it will not. Does anyone know how to fix this? |
st178661 | Would I be correct if I assume model (and model_ema as well) is a DistributedDataParallel instance? If so, the forward method of DistributedDataParallel will set some internal flags, which could cause a hang.
If you just want to evaluate, you can use model.module to retrieve the original non-DDP module. And then clone, set eval(), run forward on the original non-DDP module. This should keep DDP states intact. |
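Something like this, for example (a minimal sketch; model is the DDP-wrapped instance and valid_dataloader is the validation loader from the pseudo-code above):
import copy
import torch

eval_model = copy.deepcopy(model.module)  # plain nn.Module copy, detached from DDP's hooks
eval_model.eval()

with torch.no_grad():
    for data in valid_dataloader():
        output = eval_model(data)  # forward pass without touching DDP's internal state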
st178662 | I see. I will give it a shot. But it doesn’t explain why, if I use “model” to evaluate, no hang occurs. |
st178663 | Good point. How did you implement the model.clone() method? IIUC, neither nn.Module nor DistributedDataParallel has a clone() method. Is this a copy.deepcopy? |
st178664 | I am trying to implement parallelization as follows, but I am not sure if it is possible.
For example, train with multiple processes (CPU cores) and have each process deal with independent batches. Instead of taking the optimization step independently for each batch, I want to gather the loss and gradients from all processes and only take the optimization step on process 0.
Is it possible to do that with the torch.distributed package?
Currently, I followed the instructions from https://pytorch.org/tutorials/intermediate/dist_tuto.html 1 and have it working. |
st178665 | Hey @stclaireva
Is it possible to do that with the torch.distributed package?
This is possible. You can do the following (a rough code sketch is included after the steps):
Run forward-backward locally.
Use all_gather 1 or all_gather_coalesced 1 to collect all gradients into rank 0.
Manually add/average those gradients into param.grad on rank 0.
Run optimizer.step() on rank 0 to update parameters.
Let rank 0 broadcast the updated parameters to other ranks.
Go to step 1 to start a new iteration.
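A rough sketch of steps 2-5 (assuming init_process_group has already been called, the local backward from step 1 has populated param.grad on every rank, and every rank holds an identical copy of model):
import torch
import torch.distributed as dist

def manual_update(model, optimizer, rank, world_size):
    for param in model.parameters():
        grads = [torch.zeros_like(param.grad) for _ in range(world_size)]
        dist.all_gather(grads, param.grad)               # step 2: collect every rank's gradient
        if rank == 0:
            param.grad = torch.stack(grads).mean(dim=0)  # step 3: average into param.grad on rank 0
    if rank == 0:
        optimizer.step()                                 # step 4: update parameters on rank 0 only
    for param in model.parameters():
        dist.broadcast(param.data, src=0)                # step 5: send updated params to all ranks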
Curious, why do you need the above algorithm instead of letting DistributedDataParallel handle it for you? |
st178666 | I want to use nn.parallel.DistributedDataParallel to train my model on a single machine with 8 2080 Ti GPUs. I set the distributed config as torch.distributed.init_process_group(backend='nccl', init_method='tcp://localhost:1088', rank=0, world_size=1).
However, no GPU is used during training. If I use nn.DataParallel, this problem does not exist.
(screenshot attached) |
st178667 | DistributedDataParallel (DDP) is multi-process training. For your case, you would get the best performance with 8 DDP processes, where the i-th process calls:
torch.distributed.init_process_group(
    backend='nccl',
    init_method='tcp://localhost:1088',
rank=i,
world_size=8
)
Or, you could set env vars and use https://github.com/pytorch/pytorch/blob/master/torch/distributed/launch.py |
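For example, the 8-process variant could be spawned like this (a minimal sketch; build_model() is a hypothetical placeholder for your own model constructor):
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://localhost:1088",
        rank=rank,
        world_size=world_size,
    )
    torch.cuda.set_device(rank)
    model = build_model().cuda(rank)           # build_model(): hypothetical constructor
    ddp_model = DDP(model, device_ids=[rank])
    # ... training loop on ddp_model, each process feeding its own data shard ...
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(8,), nprocs=8)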