id | text |
---|---|
st177868 | I’m wondering if anyone has some insight into the effects of calling DDP twice for one process group initialization? Good example of this would be a GAN where there are two distinct models. Can they both safely be wrapped in DDP? I suppose a dummy container module could be made that encases both models and only requires a single DDP wrapper. I looked around but didn’t see anything posted about this. |
st177869 | Using the same process group for multiple DDP wrapped modules may work, only if they are independently used, and a call to backwards doesn’t involve both models. For GANs this may be the case, where you alternate between training the discriminator and the generator. That said, if a single call to backward involves gradient accumulation for more than 1 DDP wrapped module, then you’ll have to use a different process group for each of them to avoid interference.
You can do this as follows:
pg1 = torch.distributed.new_group(range(torch.distributed.get_world_size()))
model1 = DistributedDataParallel(
    create_model1(),
    device_ids=[args.local_rank],
    process_group=pg1)
pg2 = torch.distributed.new_group(range(torch.distributed.get_world_size()))
model2 = DistributedDataParallel(
    create_model2(),
    device_ids=[args.local_rank],
    process_group=pg2) |
st177870 | This is interesting. With most GAN architectures, the backward pass of the generator does indeed include both G and D. Generally you will get some prediction from the discriminator using the fake samples and then call backward on that, propagating that output through D, then G. It may actually be a requirement then that GANs use separate process groups for each model. Granted, in this architecture, weight updates only occur for one model at a time, but gradients should be accumulated for both D and G during the G backward/update stage. |
st177871 | If it is indeed the case that GANs need separate process groups for G and D, then that is something that definitely needs to be in the docs. I’ve had some strange training results while using DDP and this may be the cause.
@pietern Do you know if the interference you speak of would cause an exception, or just produce incorrect gradients?
I want to put together a test bed for this and see if there are indeed different gradients when using 1 vs 2 PGs. |
st177872 | @mdlockyer I think bad behavior would result in crashes or hangs. Calls to allreduce that are mixed and matched with different dimensionality across processes are a recipe for out-of-bounds memory access and undefined behavior.
Would it be possible to use autograd.grad for the discriminator part and autograd.backward for the generator part? This would avoid gradient accumulation and reduction for the discriminator. You’d still have to stitch them together at the boundary of course. |
st177873 | @pietern that’s an interesting idea! Only backward() will trigger the reducer right? Off the top of your head, what would the stitch look like between grad and backward? |
st177874 | Yes, only backward() interacts with the reducer.
I imagine combining grad and backward like this (but YMMV and I don’t know if this works).
G_out_grad = torch.autograd.grad(D_loss, G_out)
torch.autograd.backward(G_out, G_out_grad)
This wouldn’t trigger any reduction for the discriminator and only do so for the generator. |
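As a concrete sketch of that idea (untested; toy stand-ins are used for the models, loss, and optimizer; in real use G and D would be your DDP-wrapped generator and discriminator):
import torch
import torch.nn as nn

G = nn.Linear(8, 8)                      # stand-in for the DDP-wrapped generator
D = nn.Linear(8, 1)                      # stand-in for the DDP-wrapped discriminator
criterion = nn.BCEWithLogitsLoss()
g_optimizer = torch.optim.Adam(G.parameters())

noise = torch.randn(4, 8)
real_labels = torch.ones(4, 1)

# Generator step: compute the gradient at the G/D boundary without
# accumulating .grad on D's parameters (so D's reducer would not fire)...
fake = G(noise)
d_out = D(fake)
g_loss = criterion(d_out, real_labels)
fake_grad, = torch.autograd.grad(g_loss, fake)

# ...then backprop through G only, which is what triggers G's all-reduce.
fake.backward(fake_grad)
g_optimizer.step()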
st177875 | Hi! Is this the only adjustment to make DDP work with two separate models?
I am trying to train two models in two stages: I train the first DDP model A for k steps, then I use its encoder part with torch.no_grad() and then use its projection layers and a second DDP model B.
For this I use two data loaders with distributed data samplers and two optimizers, each watching only one model.
So in the training loop I have:
for k steps:
    pred = modelA(X)
    loss = lossA(pred, y)
    loss.backward()
    optimizerA.step()
....
for m steps:
    with torch.no_grad():
        x = modelA.module.encoder()
    x = modelA.module.proj(x)
    pred = modelB(x)
    loss = lossB(pred, y)
    loss.backward()
    optimizerA.step()
    optimizerB.step()
If I do not use 2 process groups I get the CUDA memory access problems you mentioned, but with 2 process groups it just seems to hang after training modelA for some number of steps.
What should I do? |
st177876 | I use a single process group for both generator and discriminator.
An example here.
You can just wrap each module with DDP separately and then use them as normal models. |
st177877 | I was using DistributedDataParallel to train a model. I ran my code in two processes with two GPUs (one process per GPU). After I pressed Ctrl+C in the terminal, one process was shut down but the other one remained running. This can be observed with the top or nvidia-smi command. So how do I shut down all the processes from my terminal? |
st177878 | How did you launch the two processes? Did you use torch.distributed.launch or some other mechanism? |
st177879 | @pritamdamania87 Yes, I use python -m torch.distributed.launch to run my code. And when I press Ctrl+C to shut down the training, some processes are not closed. I must kill them manually. |
st177880 | This is an issue in many multiprocessing tasks, not just PyTorch. The best that you can do is iteratively calling ctrl+C until all processes are terminated. |
st177881 | Let’s consider the batch size is 64 and we want to run in a cluster of two nodes, where each node contains 4 GPUs. How the batch will be split? Will the batch first split into two and thus each node will get a batch of data size 32, and finally, each node will split the data among the four GPUs, thus each GPU will get a batch of data size 8? is this the way the will be split in DistributedDataParallel mode?
Thanks in advance for the clarification. |
st177882 | Solved by mrshenli in post #4 |
st177883 | If using the recommended DistributedDataParallel (DDP) mode, where there is a dedicated process for each GPU, DDP does not split the input data. Each process will have its own data loader and its own DDP instance. DDP only helps to automatically compute the globally averaged gradient in the backward pass. DDP will run in this mode when device_ids only contains a single device, or there is only one visible device. See this for more details. |
st177884 | Thanks for the clarification. Just to confirm my understanding: that means each node will work with a separate data loader and thus each node the batch size will be 64, am I right? |
st177885 | In this case, as each node has 4 GPUs, each node will launch 4 processes with each process creating its own dataloader and DDP instance. So, each node will actually have 4 data loaders.
If you would like to run batch size of 64 across 2 node (8 gpus), then each data loader should load data size of 64 / 8 = 8. |
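For example, with a global batch of 64 on 2 nodes × 4 GPUs (8 processes), each process would build its loader roughly like this (a sketch with dummy data; it assumes init_process_group has already been called in the process):
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))  # dummy data

global_batch_size = 64
world_size = dist.get_world_size()                     # 8 in this example
per_process_batch = global_batch_size // world_size    # 64 / 8 = 8

sampler = DistributedSampler(dataset)                  # picks this rank's shard
loader = DataLoader(dataset, batch_size=per_process_batch, sampler=sampler)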
st177886 | The above information is right. @akashs @mrshenli More details as follows:
Actually, if your batch size is 64 and you use two nodes to process the data, it actually uses two data loaders of 32 each to feed the data in parallel. So when you configure your training hyperparameters, you should set the batch size to 32 if you know you want to use 2 nodes for training.
Here is the explanation. Let us print len(data_loader), and you will see clearly.
I use an input config batch size of 256 to train ImageNet. ImageNet has 1,281,167 images in total. Using 256 on one node, there are 1,281,167 / 256 = 5,004.55859375 mini-batches. Here is the log info; it shows 5025 mini-batches (I use 3 GPUs in this node, so the number is not 5,005):
[2020-09-29 15:03:12,875]-[SpawnProcess-1]-[MainThread]-[INFO]: ['Epoch: [42][ 50/5025]', 'Time 8.593 ( 4.986)', 'Data 0.000 ( 1.278)', 'Loss 9.9792e-01 (5.8119e-01)', 'Acc@1 78.82 ( 83.85)', 'Acc@5 90.59 ( 97.07)']
Now, if we use 2 nodes to process the data (each node with the same GPUs) and I use the same config (batch size 256) to train ImageNet, you will see the total number of mini-batches is halved, which means the effective batch size is 2 x 256. Here is the log info; it shows 2513 mini-batches:
[2020-09-30 16:51:24,623]-[SpawnProcess-2]-[MainThread]-[INFO]: ['Epoch: [45][1150/2513]', 'Time 4.003 ( 4.173)', 'Data 0.078 ( 0.088)', 'Loss 2.8992e-01 (3.7627e-01)', 'Acc@1 92.94 ( 90.02)', 'Acc@5 98.82 ( 98.62)']
Note, you cannot think of the example above as a 256 batch size (even if your code config says 256): it is effectively a 512 batch size for training your model, because there are two 256 data loaders feeding the data in parallel. This is very important because it can sometimes affect your model training.
Hope this answers your question! |
st177887 | I notice that in https://github.com/pytorch/examples/tree/master/distributed/rpc/pipeline, every input batch is divided into micro-batches. This is sort of like the GPipe method. But the backward pass is not parallelized, which contradicts GPipe. I wonder if there are any papers or references that explain what pipeline algorithm is being used.
I wonder if it works like this gantt chart.
[Gantt chart image] |
st177888 | Solved by mrshenli in post #3 |
st177889 | If so, is there any chances to enhance it? Makes it more like this chart below.
[Gantt chart image of the desired schedule] |
st177890 | Yep, the first chart looks correct to me.
If so, is there any chances to enhance it? Makes it more like this chart below.
Do you need single-machine multi-GPU pipeline parallel or multi-machine pipeline parallel?
If it is within a single machine, it is possible to parallelize backward as well. Check this project: torchgpipe. It inserts phony dependencies between stages of different micro batches.
If it’s multi-machine pipeline parallel, then you will need RPC. As of today distributed autograd cannot parallelize backward, because the smart mode has not been implemented yet. To get around this, you can still use RPC and RRef, but cannot use distributed autograd and will need to manually stitch together local autograd.
Another possibility (not 100% sure if this would work) is to create one distributed autograd context per micro batch, and manually call __enter__ and __exit__ on distributed autograd context. As a result, the gradients for different microbatches will be stored in different contexts, and hence you will need to call dist_optimizer.step(ctx_id) multiple times to apply the gradients.
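In rough pseudo-code, the per-micro-batch context idea could look like this (a serial sketch only; model, loss_fn, micro_batches, and dist_optim are placeholders for an RPC/RRef-based pipeline):
import torch.distributed.autograd as dist_autograd

# One distributed autograd context per micro-batch: each micro-batch's
# gradients live in their own context and are applied separately.
for micro_x, micro_y in micro_batches:
    with dist_autograd.context() as ctx_id:
        out = model(micro_x)                  # forward across the RPC stages
        loss = loss_fn(out, micro_y)
        dist_autograd.backward(ctx_id, [loss])
        dist_optim.step(ctx_id)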
Things will become a lot easier when we add smart mode distributed autograd. |
st177891 | Thanks for your reply! This helps me a lot.
I will keep trying to improve the training speed by utilizing pipelining. |
st177892 | I’m trying to do my training on multiple GPUs by using the following code (the latest pytorch version):
from torchvision import models
model = models.vgg16(pretrained=True)
model.classifier._modules['6'] = torch.nn.Linear(4096, 10)
self.model = torch.nn.DataParallel(model, device_ids=[0,1,2]).cuda()
self.model = model.to(f'cuda:0')
...
def forward(self, input_data):
    output = self.model.forward(input_data)
I get this error when I call self.model.forward(input_data) :
File "/home/poahmadvand/py3env/lib/python3.7/site-packages/torchvision/models/vgg.py", line 43, in forward
x = self.features(x)
File "/home/poahmadvand/py3env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/poahmadvand/py3env/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/home/poahmadvand/py3env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/poahmadvand/py3env/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "/home/poahmadvand/py3env/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 0 does not equal 1 (while checking arguments for cudnn_convolution)
How can I fix this error? thanks. |
st177893 | I can’t seem to reproduce the issue that you’re seeing. This code works fine on PyTorch 1.6:
from torchvision import models
import torch
model = models.vgg16(pretrained=True)
model.classifier._modules['6'] = torch.nn.Linear(4096, 10)
model = torch.nn.DataParallel(model, device_ids=[0,1,2]).cuda()
model = model.to(f'cuda:0')
input_data = torch.rand(10, 3, 225, 225)
model(input_data)
Am I missing something here? |
st177894 | @pouya.ahmadvand Do you run into the same error even if you run the code I pasted above? |
st177895 | @pritamdamania87 Thanks, it works now. The problem was that I used a function to fetch the IDs of the GPUs with the most free memory available, and this function returns an array like [5, 6, 7]. Then:
selected_gpus = [5, 6, 7]
model = torch.nn.DataParallel(model, device_ids=selected_gpus).cuda()
model = model.to(f'cuda:{selected_gpus[0]}')
which gives me that error. Now I use the array returned by the GPU selector to set the environment variable CUDA_VISIBLE_DEVICES and then:
selected_gpus = [5, 6, 7]
os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in selected_gpus)
model = torch.nn.DataParallel(model, device_ids=list(range(len(selected_gpus)))).cuda()
model = model.to(f'cuda:0')
It works fine now.
Thanks! |
st177896 | I found a strange thing. When I use two GPUs, the memory occupied in the two GPUs is the same. But when I use 4 or 6 GPUs, the memory occupied in GPUs is not exactly the same.
Has anyone ever been in this situation?
[Image: GPU memory usage across devices] |
st177897 | A device might run a bit ahead of the others and could thus create another memory footprint.
Also, if you didn’t set e.g. cudnn to the deterministic mode, the kernel selection might slightly vary between the devices. |
st177898 | Hi @ptrblck, Thanks for your quick reply! I have set cudnn to the deterministic mode. Maybe this is due to the different speeds between each GPU. Will this case affect the training process? |
st177899 | DDP synchronizes the devices if necessary and communicates the gradients etc. between the GPUs.
If one device is a ms faster, it would have to wait, but besides that you shouldn’t see any effects. |
st177900 | I am trying to run the script mnist-distributed.py from Distributed data parallel training in Pytorch 5. I have also pasted the same code here. (I have replaced my actual MASTER_ADDR with a.b.c.d for posting here).
import os
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7*7*32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N')
    parser.add_argument('-g', '--gpus', default=1, type=int,
                        help='number of gpus per node')
    parser.add_argument('-nr', '--nr', default=0, type=int,
                        help='ranking within the nodes')
    parser.add_argument('--epochs', default=2, type=int, metavar='N',
                        help='number of total epochs to run')
    args = parser.parse_args()
    args.world_size = args.gpus * args.nodes
    os.environ['MASTER_ADDR'] = 'a.b.c.d'
    os.environ['MASTER_PORT'] = '8890'
    mp.spawn(train, nprocs=args.gpus, args=(args,))

def train(gpu, args):
    rank = args.nr * args.gpus + gpu
    dist.init_process_group(
        backend='nccl',
        init_method='env://',
        world_size=args.world_size,
        rank=rank
    )
    torch.manual_seed(0)
    model = ConvNet()
    torch.cuda.set_device(gpu)
    model.cuda(gpu)
    batch_size = 100
    # define loss function (criterion) and optimizer
    criterion = nn.CrossEntropyLoss().cuda(gpu)
    optimizer = torch.optim.SGD(model.parameters(), 1e-4)
    # Wrap the model
    model = nn.parallel.DistributedDataParallel(model,
                                                device_ids=[gpu])
    # Data loading code
    train_dataset = torchvision.datasets.MNIST(
        root='./data',
        train=True,
        transform=transforms.ToTensor(),
        download=True
    )
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        train_dataset,
        num_replicas=args.world_size,
        rank=rank
    )
    train_loader = torch.utils.data.DataLoader(
        dataset=train_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=0,
        pin_memory=True,
        sampler=train_sampler)
    total_step = len(train_loader)
    for epoch in range(args.epochs):
        for i, (images, labels) in enumerate(train_loader):
            images = images.cuda(non_blocking=True)
            labels = labels.cuda(non_blocking=True)
            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, labels)
            # Backward and optimize
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if (i + 1) % 100 == 0 and gpu == 0:
                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
                    epoch + 1,
                    args.epochs,
                    i + 1,
                    total_step,
                    loss.item())
                )

if __name__ == '__main__':
    main()
There are 2 nodes with 2 GPUs each. I run this command from the terminal of the master node-
python mnist-distributed.py -n 2 -g 2 -nr 0
, and then this from the terminal of the other node-
python mnist-distributed.py -n 2 -g 2 -nr 1
But then my process gets stuck with no output on either terminal.
Running the same code on a single node using the following command works perfectly fine-
python mnist-distributed.py -n 1 -g 2 -nr 0 |
st177901 | Can you verify that a.b.c.d is reachable on port 8890 from the other node? Also, it would help to understand where exactly the process gets stuck. You could add a few print statements around init_process_group to check if the initialization is getting stuck. |
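For example, something like this around the init call in train() would show which rank hangs (purely illustrative; flush=True makes sure the output appears immediately):
print(f'[rank {rank}] before init_process_group', flush=True)
dist.init_process_group(
    backend='nccl',
    init_method='env://',
    world_size=args.world_size,
    rank=rank
)
print(f'[rank {rank}] after init_process_group', flush=True)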
st177902 | I am trying to implement model parallel using torch.distributed(DistributedDataParallel), and I am wondering is there a tutorial for that (single node multiple GPUs)? I know nn.DataParallel is easier to use, however, I use another package that only support torch.distributed. Thanks! |
st177903 | Yep, here is the tutorial: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#combine-ddp-with-model-parallelism
It should work if you place different layers of the model on different GPUs and pass it to DDP. DDP should be able to detect it is a multi-GPU model. One caveat is that, please make sure no GPUs are shared across processes. |
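A rough sketch of that pattern (names and device assignment here are illustrative; each process must own a disjoint pair of GPUs):
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class TwoGPUModel(nn.Module):
    # a model whose layers live on two different GPUs
    def __init__(self, dev0, dev1):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        self.net1 = nn.Linear(10, 10).to(dev0)
        self.net2 = nn.Linear(10, 5).to(dev1)

    def forward(self, x):
        x = torch.relu(self.net1(x.to(self.dev0)))
        return self.net2(x.to(self.dev1))

# inside each process, after init_process_group():
#   model = TwoGPUModel(f'cuda:{2 * rank}', f'cuda:{2 * rank + 1}')
#   ddp_model = DDP(model)   # note: no device_ids for a multi-device module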
st177904 | I followed the tutorial online, however I got the error message RuntimeError: Model replicas must have an equal number of parameters. in model = torch.nn.parallel.DistributedDataParallel(model)
Any idea what might cause this issue? Thanks! |
st177905 | @xdwang0726 Do you see this error when using the code in the tutorial? Or is it some custom code based on the tutorial? If it is the latter, could you provide a minimal repro of the issue? |
st177906 | Fake Distributed on 1 GPU
I have big samples, so I can’t use a big batch size. I virtually increase the batch size simply by calling the optimizer.step() every N batches. However, that of course doesn’t help the statistics of BatchNorm that are calculated per batch, and suffer from that. There is only so much I can do with the batchnorm momentum… I would like to simulate a distributed system on 1 GPU and sync the BN layers across multiple fake-parallel batches.
Is that possible? |
st177907 | Is using multiple GPUs to increase the effective batch size not an option for you? |
st177908 | e.g. I have an nn.Linear whose in_features = 500e4 and out_features = 3000, so the number of trainable parameters consumes about 55 GB (500e4 * 3000 * 4 / 1024 ** 3) of memory. My single GPU has 12 GB and I have 8 GPUs on one machine. How can I parallelize this “atomic” built-in module across multiple GPUs? |
st177909 | Solved by albanD in post #2 |
st177910 | Hi,
I’m afraid we don’t provide any construct to do this automatically.
But you can simply create 8 different Linears that each take a subset of the input, split the input yourself, call each of these Linears, and then add up all the results (assuming you split along the input dimension here, given that it is the biggest). |
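For instance, a minimal sketch of that idea (illustrative names; the bias is kept on only one shard so it is added exactly once):
import torch
import torch.nn as nn

class ShardedLinear(nn.Module):
    # splits a huge Linear along the input dimension across several devices
    def __init__(self, in_features, out_features, devices):
        super().__init__()
        self.devices = devices
        self.chunk = in_features // len(devices)
        self.shards = nn.ModuleList([
            nn.Linear(self.chunk, out_features, bias=(i == 0)).to(d)
            for i, d in enumerate(devices)
        ])

    def forward(self, x):
        xs = torch.split(x, self.chunk, dim=1)
        partial = [m(xi.to(d)) for m, xi, d in zip(self.shards, xs, self.devices)]
        out = partial[0]
        for p in partial[1:]:
            out = out + p.to(self.devices[0])   # accumulate on the first device
        return out

# quick smoke test on CPU; the real case would be
# ShardedLinear(5_000_000, 3000, [f'cuda:{i}' for i in range(8)])
layer = ShardedLinear(80, 30, ['cpu'] * 8)
out = layer(torch.randn(4, 80))                  # -> shape (4, 30)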
st177911 | Is it possible to lazy-generate a larger-than-memory random matrix with pytorch? Ideally I am looking to generate a 1e9 x 5000 matrix and compute the scalar product with another 5k x 5k matrix.
I had a look at keops, but I didn’t manage to get too far with that.
Is it be possible to do this in a distributed manner, for example by interfacing somehow with pyspark? |
st177912 | Hey @Luca_Mingarelli
torch.distributed package can help to distribute tensors across multiple nodes, but, as of today, you still need to implement the distribute matrix multiplication on your own. If you are looking for features like Mesh-TensorFlow. It is not yet available.
This is a very interesting request. Do you mind share some some details of the application? |
st177913 | Hi @mrshenli,
Say I want to perform N Monte Carlo simulations, each consisting of a dot product of a vector of length M (M=5k) with a square M x M matrix. One way is a very lengthy, never-ending loop; a matrix multiplication is much faster. This is particularly useful in credit risk, to estimate distributions of losses. With numpy you can only use an in-between solution by making batches, but being able to lazily perform such an operation gives massive advantages. Currently I solved the problem with a combination of xarray and dask, but it would be great to see this in pytorch as well. |
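Not a true lazy/out-of-core tensor, but the batching idea translates directly to PyTorch, roughly like this (a sketch; the per-chunk reduction kept here is just an example):
import torch

M = 5000
B = torch.rand(M, M)              # the M x M matrix
N = 10**9                         # total number of Monte Carlo draws
chunk = 100_000

summaries = []
for start in range(0, N, chunk):
    n = min(chunk, N - start)
    draws = torch.rand(n, M)          # generated on the fly, never held all at once
    losses = draws @ B                # (n, M) partial result
    summaries.append(losses.mean())   # keep only small per-chunk summaries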
st177914 | rpc.init_rpc('env_{}'.format(rank), rank=rank, world_size=opt.world_size, rpc_backend_options=rpc.ProcessGroupRpcBackendOptions(rpc_timeout=100))
if I use 128 nodes, it works.
but when I use 1024 nodes (32 servers * 32 processes or 16 servers * 64 processes), I meet
...
RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:184] listen: Address already in use
my environment is pytorch 1.6.0, python 3.7, cuda 10.1.
Has anyone met this before? |
st177915 | Hey @yueyilia, I haven’t seen this error before. Which version of PyTorch are you using? |
st177916 | Does the tensorpipe RPC backend work in this case? It is still experimental, but we plan to make it the default backend and focus on it in future releases.
Regarding the error, I suspect this is a limitation in Gloo. I don’t have enough resource to verify this locally. If possible, can you try if init_process_group and all_reduce work with 1024 nodes? |
st177917 | I replace process group with tensorpipe, I also meet
...
RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:184] listen: Address already in use
It seems that tensorpipe still depends on gloo.
I run init_process_group and all_reduce, and the error is the same. |
st177918 | It seems that process group needs too many ports. It will establish a connection between every two nodes, so that each server needs 32*1024 ports (32 servers * 32 processes) to establish a tcp connection. Do you have any plans to optimize it? |
st177919 | It seems that tensorpipe still depends on gloo.
It’s true for now, but only for initialization and shutdown. We will remove its dependency on gloo in future releases. (hopefully in v1.8)
It seems that process group needs too many ports. It will establish a connection between every two nodes, so that each server needs 32*1024 ports (32 servers * 32 processes) to establish a tcp connection. Do you have any plans to optimize it?
Good catch! I am not aware of any plan to improve gloo for this. cc @pbelevich |
st177920 | For the RPC Framework it seems like this is happening since Gloo creates a tcp connection for all combination of processes in the group.
I’m wondering if this can be avoided in TensorPipe, where the TCP connections are created on demand and kept in a pool for reuse. Typically in an RPC environment, we’re not talking to all the nodes in the group at the same time.
@yueyilia Could you add some details about your use case for RPC here? Are all the nodes (1024) communicating with all the other nodes in your application at the same time? Is it possible to run 1 process per server in your application to get around this in the short term? If GIL is currently the bottleneck, there is some amount of TorchScript support in the RPC framework that might help getting round GIL. |
st177921 | TensorPipe would fare better if indeed your topology (i.e., the links you actually use) are a significantly smaller subset than the complete graph. (For example, if you use a server-client pattern). In other words, if your graph is sparse. For dense (near-complete) graphs TensorPipe will perform even worse than Gloo because each pipe will internally use multiple TCP connections, whereas Gloo uses only one.
The reason you’re currently unable to use TensorPipe is because, indeed, it uses Gloo internally for the join step. We’ve been wanting to get rid of this for a while but it’s hard, because the RPC agent’s join must do a barrier, and it’s easier to do it through a library that already does collectives (namely Gloo) rather than re-implement it. We could use the c10d::Store instead of the ProcessGroup for that, but currently the Store isn’t powerful enough. @osalpekar was thinking of refactoring it though so maybe then we could do this change. See https://github.com/pytorch/pytorch/issues/42879 2 and https://github.com/pytorch/pytorch/issues/41614 for more context. |
st177922 | I’m trying below sample on multiple machines:
github.com
pytorch/examples/blob/master/distributed/rpc/parameter_server/rpc_parameter_server.py
When I tried it on 2 Linux machines in the same domain, this sample works fine. But when I try it on 2 WSL machines, I can't make the 2 WSL machines connect with each other; both machines block waiting on RPC init.
I tried the "WSL 2 TCP NETWORK FORWARDING" workaround from the post below, but RPC still can't connect:
github.com/microsoft/WSL
[WSL 2] NIC Bridge mode (Has TCP Workaround), opened Jun 16, 2019: WSL 2 seems to NAT its virtual network, instead of making it bridged to the host NIC.
st177923 | We haven't tested WSL (I assume you mean Windows Subsystem for Linux), and are not sure what the gaps are, if there are any.
cc @ptrblck do you know who is familiar with WSL?
Not sure if this can help, but I would try to verify that the following command resolves to the correct interface. If not, set GLOO_SOCKET_IFNAME explicitly.
getent hosts `hostname` |
st177924 | @mrshenli Thanks for your reply.
Yes, we are using Windows Subsystem for Linux, as Windows does not support distributed yet. Although I see an open PR for this (but we don't know if RPC will be supported in this PR): https://github.com/pytorch/pytorch/pull/42897
And yes, hostname resolves correctly. One difference between the Linux machines and the WSL machines that we are aware of is that a WSL machine shares its IP with the host Windows system but needs port forwarding from Windows to WSL. We already tried the solution below that adds port forwarding, but it seems RPC still can't connect.
https://github.com/microsoft/WSL/issues/4150
So the question is: say rank 0 is listening on port 12560, for example. Is this the only port that will be used on the rank 0 machine during RPC initialization and connection?
Also, is there any way to output detailed information from RPC during initialization so we can find more details? Ideally we would be able to know what address/port rank 0 is listening on, where the other ranks are connecting to, and whether some mismatch in the address or something else is causing the RPC initialization to hang. |
st177925 | Unfortunately, I’m not familiar with it and don’t know a specific user who might be.
Maybe @maxiluk or @peterjc123 would know more. |
st177926 | frankdong:
Although I see an open PR for this (but we don’t know if rpc will be support in this PR):
Yep, MSFT experts are helping us adding Windows support to PT Distributed. The first step focuses on DDP only, but we do plan to cover all features in the distributed package in future releases.
So question is say rank0 is listening on one port 12560 for example, is this the only port will be used on rank0 machine during rpc initialization and connection?
I see. No, that port is only used for rendezvous during initialization. RPC backend will grab a port for each pair of workers, which are not visible to users.
cc @lcw any suggestion on how RPC can work with port forwarding? |
st177927 | mrshenli:
RPC backend will grab a port for each pair of workers, which are not visible to users.
Do we know the port range so we can have a test |
st177928 | I’m not super familiar with port forwarding on WSL, but I assume it does some sort of NAT, right? In that case, I suspect it would be enough if only the listening socket(s) on each server had a fixed port which is correctly forwarded? (as then all other sockets that are accepted will have a random port but the NAT would be aware of them and would handle them)
Unfortunately, at this moment, also the port of the listening socket(s) is picked at random (well, not at random, but we’re letting the kernel give us back an arbitrary available port). We don’t support a way for the user to specify a port. Which I think means there’s no real way for you to set up port forwarding before launching the application. |
st177929 | I am trying to add distributed training on my program as my model is relatively large. The program runs well without distributed training, however, when I add distributed training, the program freezes right at model = torch.nn.parallel.DistributedDataParallel(model) without returning any error messages. I am wondering have anyone else faced this situation before? Are there any possible solutions? Thanks! |
st177930 | xdwang0726:
model = torch.nn.parallel.DistributedDataParallel(model)
When doing the above without specifying a device_id, it will try to replicate the model to all visible devices in each process (unless the model is on CPU). Is this intentional? The recommended use of DDP is to let each process exclusively operate on one GPU.
Beside, before v1.7 DDP will create communication buckets. The total size of those buckets will be the same as model size. So the GPU memory size needs to be at least 3X of the model size. |
st177931 | Thanks for your reply! I have specified the cuda ids when training. When I trained in a single GPU, it returns the error message that ‘CUDA requires ~6G more memory’. I add one more 16G GPU to do the training. According to your suggestion, it seems like I need at least 4 GPUs in total? Thanks! |
st177932 | mrshenli:
the GPU memory size needs to be at least 3X of the model size.
Thanks for your reply! I have specified the cuda ids when training. When I trained in a single GPU, it returns the error message that ‘CUDA requires ~6G more memory’. I add one more 16G GPU to do the training. According to your suggestion, it seems like I need at least 4 GPUs in total? Thanks! |
st177933 | Hey @xdwang0726, do you mind sharing what was the cause of the problem, and how it was resolved? In case future users hit the same issue. |
st177934 | Before I was using local_rank = torch.distributed.get_rank() and the program freezes. I manually set local_rank=-1 that solves the problem. |
st177935 | Is it possible to train a model across multiple remote servers in my department? These servers are not connected to each other. I want to use GPUs of both the servers (with different IP addresses) so that I can train with larger batch size.
I have seen nn.DistributedDataParallel but how do I mention the IP address of multiple servers? |
st177936 | Solved by mrshenli in post #2 |
st177937 | motor_junkie:
These servers are not connected to each other.
What does this mean? Their IPs are not reachable from each other?
I have seen nn.DistributedDataParallel but how do I mention the IP address of multiple servers?
If they can reach each other through the network, yes, DistributedDataParallel can work across multiple machines. You need to provide the master address and master port for all peers to do rendezvous. See this example.
If you want to choose a specific network interface, you can configure the following two env vars (more details in the docs). You only need one of them depending on which backend you are using.
NCCL_SOCKET_IFNAME, for example export NCCL_SOCKET_IFNAME=eth0
GLOO_SOCKET_IFNAME, for example export GLOO_SOCKET_IFNAME=eth0 |
st177938 | I think I didn’t describe my situtation correctly. What I meant is that they are different systems. One has IP: a.b.c.d, and the other has IP: a.b.c.e.
Okay, thanks! I’ll try it out |
st177939 | I think I didn’t describe my situtation correctly. What I meant is that they are different systems. One has IP: a.b.c. d , and the other has IP: a.b.c. e .
I see. This should be fine. You only need to specify one of them as the master, and set MASTER_ADDR and MASTER_PORT for all peers to point to that master. This will allow all peers to do rendezvous, and the rendezvous process will create connections between pairs. |
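A minimal sketch of that rendezvous setup (IPs and the port are placeholders; the same script runs on both machines with a different rank):
import os
import torch.distributed as dist

os.environ['MASTER_ADDR'] = 'a.b.c.d'    # the machine chosen as master
os.environ['MASTER_PORT'] = '29500'      # any free port on the master

rank = int(os.environ.get('RANK', 0))    # 0 on a.b.c.d, 1 on a.b.c.e
world_size = 2                           # one process per machine in this sketch

dist.init_process_group('gloo', init_method='env://',
                        rank=rank, world_size=world_size)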
st177940 | Edit: Mistaken!
This was my issue:
github.com/pytorch/pytorch
DistributedDataParallel: resume training from a checkpoint results in additional processes on GPU 0 (issue opened and closed Jul 21, 2019)
[Screenshot: nvidia-smi showing extra processes on GPU 0]
Here’s a screenshot of distributed training in Pytorch when I call the train function like:
CUDA_VISIBLE_DEVICES=1,2,3,4 python -m torch.distributed.launch --nproc_per_node=4 train_new.py. You can see that the first rank has also initted 3 separate processes for each other GPU. When I use 10 GPUs on a box this severely limits the batch size, since the 0th dimension node has so much less capacity. What is it storing? I thought gradients in DDP were all-reduced. I’ve also tried turning broadcast_buffers to False to no avail.
Model is stacked modules of 1D-conv, relu, batch norm, LSTM, followed by a large softmax layer and CTC loss. Backend is NCCL
Pytorch 1.3.0, Cuda 10.1, Titan RTX, Ubuntu 18.04. Can provide more code upon request. |
st177941 | The discussion here might be helpful.
This is likely due to some tensor/context being unintentionally created on the 1st GPU, e.g., when calling torch.cuda.empty_cache() without a device guard. Solutions would be either 1) carefully walking through libs/code to make sure no state leaks to cuda:0, or 2) setting CUDA_VISIBLE_DEVICES to let each process only see one GPU. The second approach might be easier. |
st177942 | @PCerles I’m having a similar issue. Were you able to resolve your problem? Thanks. |
st177943 | @PCerles @Felix_Kreuk
What @mrshenli mentioned could seamlessly happen when you load saved parameters without specifying map_location.
torch.load by default loads parameters to the device where they were, usually the rank 0 device.
load_state_dict then copies the loaded value from that device to the target device.
After the intermediate use, torch still occupies the GPU memory as cached memory.
I had a similar issue and solved it by directly loading parameters to the target device.
For example:
state_dict = torch.load(model_name, map_location=self.args.device)
self.load_state_dict(state_dict)
Full code here. |
st177944 | Hi.
I have a program using distributed data parallel of Pytorch.
It’s working well, but I do not know how to access data in the GPU process from main().
In particular, a GPU process, train(), produces a list of loss, and I want to plot it in the main() after returning from spawn(). However, I do not know how to access the list in train() on GPU from main() on CPU.
If I use a global variable, it should work, but it does not seem to be the best answer. I understand that printing loss can be done by gpu[0], and maybe even plotting the graph too. But, I want to do many tasks to analyze the results in main().
I appreciate any information or examples. Thank you. |
st177945 | I have checked communication using args and global variables.
None of them works for transmitting data from a GPU process to main() when I use DDP.
Usually, the arguments hold pointers to variables, and main() and called functions can share the same variables. But, when the processes are spawned in DDP, it seems that args are deep-copied and there is no common variables having the same ids between main() and the processes.
A global variable can be declared in the process, but it’s not shared with main().
Does everybody using DDP plot the loss charts in a spawned process on GPU?
I have no idea how to send the loss list from the process to main().
Please advise. |
st177946 | TT_YY:
I have checked communication using args and global variables.
None of them works for transmitting data from a GPU process to main() when I use DDP.
This is true, because Python global vars are per-process concept.
Does everybody using DDP plot the loss charts in a spawned process on GPU?
This can be done using torch.multiprocessing.SimpleQueue. E.g., let the main process create the queue, pass it to the child process, and then let the child process put the loss object to the queue. Then, the main process should be able to see that.
The test below can serve as an example:
github.com
pytorch/pytorch/blob/2c4b4aa81bc8dba8272e9c7190edcaa3e114ec15/test/test_multiprocessing.py#L580-L600
def test_event_multiprocess(self):
    event = torch.cuda.Event(enable_timing=False, interprocess=True)
    self.assertTrue(event.query())
    ctx = mp.get_context('spawn')
    p2c = ctx.SimpleQueue()
    c2p = ctx.SimpleQueue()
    p = ctx.Process(
        target=TestMultiprocessing._test_event_multiprocess_child,
        args=(event, p2c, c2p))
    p.start()
    c2p.get()  # wait for until child process is ready
    torch.cuda._sleep(50000000)  # spin for about 50 ms
    event.record()
    p2c.put(0)  # notify child event is recorded
    self.assertFalse(event.query())
    c2p.get()  # wait for synchronization in child
    self.assertTrue(event.query())
    p.join() |
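For the loss-list case specifically, a stripped-down version could look like this (an untested sketch; the DDP setup and training loop inside train() are omitted):
import torch.multiprocessing as mp

def train(rank, world_size, queue):
    losses = []
    # ... init_process_group, build the DDP model, append to losses each step ...
    if rank == 0:
        queue.put(losses)                  # send results back to main()

def main():
    world_size = 2
    ctx = mp.get_context('spawn')
    queue = ctx.SimpleQueue()
    procs = mp.spawn(train, args=(world_size, queue),
                     nprocs=world_size, join=False)
    losses = queue.get()                   # blocks until rank 0 puts the list
    procs.join()
    # plot/analyze `losses` here on the CPU side

if __name__ == '__main__':
    main()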
st177947 | I’m not sure if my case is what you want but I use GPU communication functions to plot loss graph during training.
I use a custom Loss class that inherits nn.modules.loss._Loss.
It calculates the loss and stores the record and plots the loss graph.
The loss values are synchronized inside the Loss class
Thus, I don’t have to send values to main() scope.
Here’s my public GitHub code.
The all_reduce function performs the sync and is called at the end of an epoch from the trainer. |
st177948 | Thank you, Shen Li.
This is be what I was looking for. I will try the sample code and try to use SimpleQueue() in my program. I hope it works!
Thank you. |
st177949 | Thank you, seungjun
I appreciate your code.
I understand that all_reduce function transmits loss between GPUs.
I am trying to evaluate performance of optimizers by changing many factors such as batch size and learning rate. Therefore, I have multiple loops to change the variables outside the optimization loop. Also, I have to collect many kinds of data along with loss and accuracy to analyze the details of the optimizers and plot them.
I felt a bit odd about performing such non-compute looping tasks and all the plotting using the expensive GPU and its memory. So I tried to implement all the outer loops, analysis, and plotting tasks in main(), which requires getting the information from the spawned GPU processes.
Thank you. |
st177950 | Is there a way to verify that the allreduce operation is getting called in multi-node DDP training with the NCCL backend? In my training the results of single-node and distributed training appear similar. @mrshenli @apaszke |
st177951 | One option is to use nvprof.
In my training the results of single node and distributed training appear similar.
You mean speed is similar? What is the batch size fed into each DDP instance? When using DDP, the batch_size should be updated to original_batch_size/world_size. |
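Another quick sanity check is to compare a parameter checksum across ranks after an optimizer step (a sketch; it assumes an initialized process group and a DDP-wrapped model on GPU). With working gradient synchronization the printed values should all match:
import torch
import torch.distributed as dist

with torch.no_grad():
    checksum = torch.stack([p.float().sum() for p in model.parameters()]).sum()
    gathered = [torch.zeros_like(checksum) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, checksum)
    if dist.get_rank() == 0:
        print([t.item() for t in gathered])    # should be (nearly) identical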
st177952 | No. I have divided the batch size by the world size.
Will check out nvprof and also create minimal working example as I can not share code. |
st177953 | I have several questions about distributed.rpc package, hope someone could shed a light on them, thanks:
Looks like during rpc init(init_rpc), all nodes need to call init_rpc then init will be finished, otherwise all nodes will block waiting. Is there any reason for that behavior? In some scenario, when main node get initialized and some (not all) workers got initialized, the main node should able to send some work to workers and no need to wait for all workers to be initialized.
Does rpc support adds/removes nodes after all nodes got initialized?
What will happen if some node got disconnected after all nodes got initialized? Will the rest of nodes blocking waiting or other nodes will work as normal?
Does rpc have any fail recover built in or plan to introduce in future? |
st177954 | Hey @frankdong, thanks for the questions. Could you please share some more info about your use case? We would love to learn your requirements and use that as input for elastic RPC design.
Looks like during rpc init(init_rpc), all nodes need to call init_rpc then init will be finished, otherwise all nodes will block waiting. Is there any reason for that behavior? In some scenario, when main node get initialized and some (not all) workers got initialized, the main node should able to send some work to workers and no need to wait for all workers to be initialized.
This is a limitation of the current RPC implementation. The main reason is because early versions of RPC uses ProcessGroupGloo to do communication, which requires a rendezvous to initialize. We are working on adding elastic support (nodes can join/leave dynamically), and hopeful can address this problem. TensorPipe RPC backend is one step towards that, but the current version still has some dependency on ProcessGroupGloo.
cc @agolynski for elasticity
cc @lcw for TensorPipe
Does rpc support adds/removes nodes after all nodes got initialized?
Not yet, but this is in our roadmap.
What will happen if some node got disconnected after all nodes got initialized? Will the rest of nodes blocking waiting or other nodes will work as normal?
Does rpc have any fail recover built in or plan to introduce in future?
If the network failure is transient, it should be fine. It will retry internal control messages automatically. If the message contains user function and got lost during the network failure, the sender should see an timeout error.
If it’s a permanent network failure or node failure, the current version cannot handle those. We do plan to cover this gap. There is some relevant discussion here: https://github.com/pytorch/pytorch/issues/41425 |
st177955 | @mrshenli Thanks for your detailed answer.
Do we have a rough timeline for elastic RPC?
Regarding our scenario: we are using the RPC package as the distributed infrastructure. We have 1 scheduler node that is stateful and will split jobs and send them out to workers. Workers are stateless; they receive a job from the scheduler and report the result back as soon as they finish executing it. Then the scheduler decides whether there are more jobs to send out, checks if there are any idle workers, and coordinates the whole execution process. So we need our distributed infrastructure to be flexible enough to manage all the distributed workers and handle node failures properly. |
st177956 | Hi,
I want to use data parallel to train my model on single GPU. I followed the example of Pytorch DISTRIBUTED DATA PARALLEL, and pass the same device_id to 4 processes. With all-reduce sync method, it runs even slower than using a single process. The interesting thing is by disabling all_reduce sync-up for gradients, there is a great speed up of training itself. So I think the GPU has extra compute capacity for multiple training process, but the bottleneck is all-reduce method. Any one know the reason for this bottleneck? Is there any other way to sync param gradients without using all_reduce? Thanks. |
st177957 | I want to use data parallel to train my model on single GPU. I followed the example of Pytorch DISTRIBUTED DATA PARALLEL, and pass the same device_id to 4 processes. With all-reduce sync method, it runs even slower than using a single process.
The all_reduce op expects each process to exclusively work on a different GPU. If the same GPU is shared across processes, it does not guarantee to work.
The interesting thing is by disabling all_reduce sync-up for gradients, there is a great speed up of training itself.
How did you disable that in DDP?
So I think the GPU has extra compute capacity for multiple training process, but the bottleneck is all-reduce method.
Looks like your model is small enough such that each op will just occupy a subset of resources in the GPU.
Is there any other way to sync param gradients without using all_reduce?
Since you just use one GPU, you can try using multi-processing. Say launch a main process and use torch.multiprocessing to spawn 4 subprocesses. Use torch.multiprocessing.SimpleQueue to pass grad tensors from sub-processes back to the main process, let the main process accumulate them, and then pass the result back to all subprocesses.
The test below can serve as a example for SimpleQueue:
github.com
pytorch/pytorch/blob/2c4b4aa81bc8dba8272e9c7190edcaa3e114ec15/test/test_multiprocessing.py#L580-L600
def test_event_multiprocess(self):
    event = torch.cuda.Event(enable_timing=False, interprocess=True)
    self.assertTrue(event.query())
    ctx = mp.get_context('spawn')
    p2c = ctx.SimpleQueue()
    c2p = ctx.SimpleQueue()
    p = ctx.Process(
        target=TestMultiprocessing._test_event_multiprocess_child,
        args=(event, p2c, c2p))
    p.start()
    c2p.get()  # wait for until child process is ready
    torch.cuda._sleep(50000000)  # spin for about 50 ms
    event.record()
    p2c.put(0)  # notify child event is recorded
    self.assertFalse(event.query())
    c2p.get()  # wait for synchronization in child
    self.assertTrue(event.query())
    p.join() |
st177958 | Hi @mrshenli, thanks for the reply. I was following this example, which uses an average_gradients function that calls all_reduce:
So to disable all_reduce, I just didn't call this function during training. And I notice there are two different examples; the other one uses a DDP model and the sync-up step is done in the backward computation. But I want a customized sync-up method, which is why I chose the approach with a separate average_gradients function.
I have tried using SimpleQueue to pass some large data, but it seems slow when calling .get(), I’ll try to use it for passing grad tensors and see how it performs. |
st177959 | Hello folks!
I’m stuck with one very strange problem: I work with the recently released Scene Graph Benchmark and made it train on GQA, but I have one issue with that. It trains as expected when I use the following command: CUDA_VISIBLE_DEVICES=0,1,2,3 python tools/relation_train_net.py --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml" (it uses batch_size = 2). But when I use another command with torch.distributed.launch (CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --master_port 10025 --nproc_per_node=1 tools/relation_train_net.py --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml") I get RuntimeError: CUDA out of memory (it still has batch_size = 2). Initially I wanted to train it on 4 GPUs with batch_size = 8, but I ran into this problem. What can the problem be? And what should I do in order to properly train it on 4 GPUs?
My setup includes 4 2080 Ti GPUs, so there is plenty of memory. |
st177960 | nullkatar:
I’m stuck with one very strange problem: I work with recently released Scene Graph Benchmark and made it train on GQA, but I have one issue with that. It trains as expected when I use the following command: CUDA_VISIBLE_DEVICES=0,1,2,3 python tools/relation_train_net.py --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml" (it uses batch_size = 2 ). But when I decide to use another command with torch.distributed.launch ( CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --master_port 10025 --nproc_per_node=1 tools/relation_train_net.py --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml" ) I’m getting RuntimeError: CUDA out of memory ( it still has batch_size = 2 ). Initially I wanted to train it on 4 GPUs with batch_size = 8 , but figured out about this problem. What can be the problem? And what should I do in order to properly train it on 4 GPUs?
Hi Leon,
What version of pytorch do you use? |
st177961 | Could you try to run the DDP command on a single node and GPU and check the memory usage?
I guess the code might create unnecessary CUDA contexts on other devices, but since the repository contains a lot of files I haven’t looked through all of them. |
st177962 | I just changed https://github.com/ZPdesu/SEAN to distributed training, and I get a weird error.
There is no error when I use one GPU, but it stops without printing any errors when I use multiple GPUs.
With two GPUs it stops after every 13 epochs.
What could be the cause of such a problem?
my environments:
ubuntu: 16.04
gpu:nvidia-2080ti
cuda: 10.1 / 10.2
pytorch: 1.6.0 / 1.7.0
nccl: 2.4.8 / 2.7.6
python:3.6 / 3.7 |
st177963 | Hey @Feywell
By “change https://github.com/ZPdesu/SEAN distributed training”, which distributed training API are you referring to (e.g., DistributedDataParallel, c10d, RPC)?
Could you please share the code that uses distributed APIs? |
st177964 | I just use DistributedDataParallel like this:
if opt.distributed:
    cudnn.benchmark = True
    opt.device = "cuda"
    torch.cuda.set_device(opt.local_rank)
    torch.distributed.init_process_group(backend="nccl",
                                         init_method="env://")
    synchronize()
And model:
if opt.distributed:
    self.pix2pix_model = torch.nn.parallel.DistributedDataParallel(self.pix2pix_model,
                                                                   device_ids=[opt.local_rank],
                                                                   output_device=opt.local_rank,
                                                                   find_unused_parameters=True)
    self.pix2pix_model_on_one_gpu = self.pix2pix_model.module |
st177965 | The initialization looks correct to me.
self.pix2pix_model_on_one_gpu = self.pix2pix_model.module
Question: why retrieving the local model from DDP model?
It will stop after Every 13 epoches by two gpus.
You mean the program crashes without any error message? How did you launch the two DDP processes? |
st177966 | This line just be used to save model:
github.com
ZPdesu/SEAN/blob/04c7536ff3fecd2d1a09c9ae046a1144636033a5/trainers/pix2pix_trainer.py#L23 1
updates the weights of the network while reporting losses
and the latest visuals to visualize the progress in training.
"""

def __init__(self, opt):
    self.opt = opt
    self.pix2pix_model = Pix2PixModel(opt)
    if len(opt.gpu_ids) > 0:
        self.pix2pix_model = DataParallelWithCallback(self.pix2pix_model,
                                                      device_ids=opt.gpu_ids)
        self.pix2pix_model_on_one_gpu = self.pix2pix_model.module
    else:
        self.pix2pix_model_on_one_gpu = self.pix2pix_model
    self.generated = None
    if opt.isTrain:
        self.optimizer_G, self.optimizer_D = \
            self.pix2pix_model_on_one_gpu.create_optimizers(opt)
        self.old_lr = opt.lr

def run_generator_one_step(self, data):
The program crashes at different epochs with different numbers of GPUs, without any error message.
But it is fine on one GPU.
I use pytorch launch function:
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py |
st177967 | I have an inference function func that takes 9 seconds on average to run. But when I try to use multiprocessing to parallelize it (even using torch.multiprocessing), each inference takes 20 seconds on average. Why is that? |
For info:
func is an inference function which takes in a patient_name and runs a torch model in inference on that patient’s data.
device = torch.device('cpu')
def func(patient_name):
    data = np.load(my_dict[patient_name]['data_path'])
    model_state = torch.load(my_dict[patient_name]['model_state_path'], map_location='cpu')
    model = my_net(my_dict[patient_name]['HPs'])
    model = model.to(device)
    model.load_state_dict(model_state)
    model.eval()
    result = model(torch.FloatTensor(data).to(device))
    return result
from torch.multiprocessing import Pool
core_cnt = 10
pool = Pool(core_cnt)
out = pool.starmap(func, pool_args) |