id | text |
---|---|
st179068 | Hello,
In the ImageNet example linked here https://github.com/pytorch/examples/blob/master/imagenet/main.py, when we call the main_worker through mp.spawn, how is the main_worker getting the GPU argument? When I try to run this with 2 nodes that have 2 GPUs each, I always see this parameter to be None. It works for multiple nodes with single GPUs. |
st179069 | Solved by mrshenli in post #2
multiprocessing.spawn will feed the process id as the first argument to the target function. Here is the API doc: https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn |
st179070 | multiprocessing.spawn will feed the process id as the first argument to the target function. Here is the API doc: https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn |
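(A minimal sketch of that behavior; the worker function, argument values, and nprocs below are placeholders, not the example's actual code.)
import torch.multiprocessing as mp
# mp.spawn passes the process index (0 .. nprocs-1) as the first argument,
# so `gpu` below receives the local rank automatically; only the remaining
# arguments come from the `args` tuple.
def main_worker(gpu, ngpus_per_node, args):
    print(f"spawned worker for local GPU {gpu} of {ngpus_per_node}")
if __name__ == "__main__":
    ngpus_per_node = 2
    args = None  # placeholder for parsed command-line arguments
    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))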
st179071 | I am getting the following error when I try to all-reduce a list of tensors
RuntimeError: Tensors must be contiguous
Here is a snippet of code
t_ = [...]  # list of tensors
for t in t_:
    dist.all_reduce(t, op=dist.ReduceOp.SUM, group=group) |
st179072 | Solved by babababadukeduke in post #2
Figured it out! I have to make the tensors contiguous before reducing them. This article provides a nice description of it. |
st179073 | Figured it out! I have to make the tensors contiguous before reducing them. This article provides a nice description of it. |
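(For reference, a minimal sketch of the fix described above; t_ and group are assumed to come from the snippet in the question.)
import torch.distributed as dist
# all_reduce requires contiguous tensors, so make each tensor contiguous first;
# .contiguous() returns the tensor itself if it is already contiguous
t_ = [t.contiguous() for t in t_]
for t in t_:
    dist.all_reduce(t, op=dist.ReduceOp.SUM, group=group)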
st179074 | I am trying to do multi-GPU training with DistributedDataParallel. I wrap it around my model. However, my model has a custom function that I now call via model.module.function(x). I was wondering if this is OK or if something bad will happen. Thanks |
st179075 | Solved by mrshenli in post #10
Yes, I think the gradients should be fine. |
st179076 | What does this custom function do? and when do you call this custom function? If it does not modify parameters and the autograd graph built during the forward pass, it should be OK. |
st179077 | The pseudo code is something like this
output = model(input)
output2 = model(input2)
final_output = model.module.function(output, output2)
loss = loss_function(final_output)
optimizer.zero_grad()
loss.backward()
optimizer.step()
Would this be fine? The custom function is just a MLP to classify something. It does not change anything, but I want it to get updated when I call my optimizer.step() |
st179078 | If model is a DistributedDataParallel (DDP) instance, this won't work, because DDP sets up some internal state at the end of the forward pass and does not support calling forward twice without a backward in between.
However, this can easily be solved by wrapping the two forward passes and the function invocation into a wrapper model, and then passing that wrapper model to DDP, something like:
class WrapperModel(nn.Module):
    def __init__(self, model):
        super(WrapperModel, self).__init__()
        self.model = model
    def forward(self, input, input2):
        output = self.model(input)
        output2 = self.model(input2)
        final_output = self.model.function(output, output2)
        return final_output
ddp = DistributedDataParallel(WrapperModel(model).to(device), device_ids=[device])
final_output = ddp(input, input2)
loss = loss_function(final_output)
optimizer.zero_grad()
loss.backward()
optimizer.step() |
st179079 | I called broadcast_buffers=False, so I didn't have an issue calling forward twice. In that case, is it fine if I call my custom function the way I did, and will the gradients be correct? |
st179080 | If the model.module.function is not using the parameters in the model, it should work. |
st179081 | A little more details on my method. Pseudo code is
class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()
        self.mlp = MLP()
    def encode(self, x):
        return self.encoder(x)
    def decode(self, x):
        return self.decoder(x)
    def classify(self, a, b):
        return self.mlp(a, b)
    def forward(self, x):
        enc = self.encode(x)
        out = self.decode(enc)
        return enc, out
# this is my main training script
enc, out = model(x)
enc2 = enc + d #d is some random perturbations
out2 = model.module.decode(enc2)
pred = model.module.classify(enc, enc2)
There is a bunch of other stuff, but in this scenario my decode function is using the parameters of the model. Would this be an issue? There are no errors when running. |
st179082 | How do you compute the final loss (the one backward is launched from)? I assume both enc and out contribute to that loss? If so, this looks OK to me.
This shouldn't be an issue for your current use case, but I want to mention that this probably won't work correctly with find_unused_parameters=True mode. Because the mlp is used outside of forward, and DDP finds unused parameters using the forward output, DDP would treat the parameters in mlp as unused in that mode, although they are actually part of the autograd graph. |
st179083 | my loss function is something like
loss1 = adv_loss(out) #make output image look realistic
loss2 = adv_loss(out2)
loss3 = adv_loss(enc) #make encoding normal distributed
loss4 = adv_loss(enc2)
loss5 = l1_loss(out, x) # reconstruction loss
loss6 = l1_loss(out2, x)
loss7 = cross_entropy_loss(pred, GT)
I don't have find_unused_parameters=True and have no error. If I understand what you are saying, the gradients are fine? |
st179084 | Dear @mrshenli,
I have successfully run the RNN RPC tutorial/example shown in the link below:
https://pytorch.org/tutorials/intermediate/rpc_tutorial.html
This is very helpful for me to understand the basics of the RPC framework, and it demonstrates very clearly how to do distributed training with two machines (nodes).
However, my question is what if we have 3 or more nodes (workers) and we want to split the model into submodels on each machine/node/worker. How should I use the RPC framework to help to pass the intermediate result of the previous worker to the next worker?
Take the RNN tutorial/model as an example:
In the tutorial, basically it does this in the forward pass:
def forward(self, input, hidden):
    # pass input to the remote embedding table and fetch emb tensor back
    emb = _remote_method(EmbeddingTable.forward, self.emb_table_rref, input)
    output, hidden = self.rnn(emb, hidden)
    # pass output to the remote decoder and get the decoded output back
    decoded = _remote_method(Decoder.forward, self.decoder_rref, output)
    return decoded, hidden
By using rpc.remote and rpc_sync, I can have the EmbeddingTable do the forward pass remotely on worker#1 and get the EmbeddingTable's result back locally.
Then I pass the EmbeddingTable's result to my local RNN (worker#0) and get the corresponding RNN result.
I have another Decoder remotely on worker#1, and I push the RNN result to that Decoder and then get the result back by using rpc_sync.
However, what if I have three workers, worker#0 (local), worker#1 and worker#2. But this time, I put the RNN model on remote worker#2 and I want to have the communication like below:
From worker#0, I push the input to the EmbeddingTable on worker#1. After worker#1 finishes the calculation, it passes the result to the RNN on worker#2.
The RNN on worker#2 calculates and passes the result back to worker#1 for the Decoder.
After the Decoder on worker#1 finishes the computation, I (on worker 0) use rpc_sync or to_here to get the final result back to local.
Would you think this is possible and let me know how to do this? Besides, can the Distributed Autograd and the Distributed Optimizer still be applied in this scenario?
One of my thoughts is that if I could make an RRef to each submodule’s output and pass them among the workers.
Thank you very much in advance for your time and help!
Best,
Ziyi |
st179085 | Hey @ZiyiZhu, thanks for trying out RPC.
Would you think this is possible and let me know how to do this?
One of my thoughts is that if I could make an RRef to each submodule’s output and pass them among the workers.
Solution I
Yes, this is possible, and using RRef is certainly one proper solution. More specifically, we can let worker 0 serve as the master here, something like:
# On worker 0
emb_lookup_rref = rpc.remote("worker1", EmbeddingTable.forward, args=(input,))
# note that RNN.forward needs to call to_here() on emb_lookup_rref
rnn_result_rref = rpc.remote("worker2", RNN.forward, args=(emb_lookup_rref,))
# similarly Decoder also needs to call to_here() on rnn_result_rref
decoded_rref = rpc.remote("worker1", Decoder.forward, args=(rnn_result_rref,))
final_result = decoded_rref.to_here()
Above should work. Although it would result in several additional lightweight messages to manage internal RRef reference counts, it shouldn't slow down training. Because rpc.remote is non-blocking, it returns an RRef immediately. It's very likely that the RNN module is already waiting on to_here() to get the embedding lookup result even before the EmbeddingTable has finished processing the request, so there shouldn't be a noticeable delay on the critical path.
Solution II
An alternative solution is to use nested RPC. You can wrap the EmbeddingTable -> RNN -> Decoder sequence into one module forward function (say MyModel.forward), and then let worker 0 run rpc.rpc_sync("worker1", MyModel.forward, args=(input,)). Within MyModel.forward, it can also use rpc/remote to communicate with worker 2, something like:
class MyModel(nn.Module):
    def forward(self, input):
        lookup_result = self.emb(input)
        # here we directly pass the lookup result instead of an RRef to the RNN
        rnn_result = rpc.rpc_sync("worker2", RNN.forward, args=(lookup_result,))
        return self.decoder(rnn_result) |
st179086 | ZiyiZhu:
Besides, can the Distributed Autograd and the Distributed Optimizer still be applied in this scenario?
Sorry I missed this. Yes, both of them would still work in this case, as long as you wrap the top-level (not nested) RPCs with distributed autograd context. All RPCs originated from that context will propagate the context id, so that autograd and optimizer on different workers will be able to find the context properly.
For Distributed Optimizer, as long as 1) you provide a correct list of param RRefs to its constructor 2) its step() function is wrapped by the correct dist autograd context, it should work. It does not care where those parameters live.
BTW, in the next release, we are making dist autograd/optimizer functional. They will take the context id as an argument and will not need to be wrapped by a with context statement anymore. |
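(A minimal sketch of that wrapping, using the v1.4-style API from the RPC tutorial; param_rrefs, model, loss_fn, inputs, and targets are placeholders.)
import torch.optim as optim
import torch.distributed.autograd as dist_autograd
from torch.distributed.optim import DistributedOptimizer
dist_optim = DistributedOptimizer(optim.SGD, param_rrefs, lr=0.05)
with dist_autograd.context() as context_id:
    # top-level RPCs issued inside this block propagate the context id to other workers
    loss = loss_fn(model(inputs), targets)
    dist_autograd.backward([loss])  # newer releases take the context id: backward(context_id, [loss])
    dist_optim.step()               # newer releases: dist_optim.step(context_id)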
st179087 | Hi @mrshenli,
Thank you very much for the solutions. I have tried the first solution quickly and it works! I will try it with the Distributed Autograd and Optimizer once I construct the entire network for training. However, one of my concerns is that when we train the network, there are many iterations and epochs, which means we will have lots of forward passes like the following:
# On worker 0
emb_lookup_rref = rpc.remote("worker1", EmbeddingTable.forward, args=(input,))
# note that RNN.forward needs to call to_here() on emb_lookup_rref
rnn_result_rref = rpc.remote("worker2", RNN.forward, args=(emb_lookup_rref,))
# similarly Decoder also needs to call to_here() on rnn_result_rref
decoded_rref = rpc.remote("worker1", Decoder.forward, args=(rnn_result_rref,))
final_result = decoded_rref.to_here()
Tons of RRefs (emb_lookup_rref, rnn_result_rref, and decoded_rref) will be created by rpc.remote. Should I worry about this? Or will these RRefs be destructed automatically?
Thank you!! |
st179088 | ZiyiZhu:
Or these RRefs will be deconstructed automatically?
RPC will automatically track RRef reference count. This describes the algorithm. The object referenced by the RRef will be deleted automatically when the reference count drops to 0. So, they should be deleted automatically, and we saw it works correctly in intensive training applications. One thing I want to mention is that this relies on Python GC to delete vars like emb_lookup_rref in time though, which should be the case if there is no circular reference that points to the RRef. Let us know if you hit OOM due to RRef. We can probably expose the deletion APIs explicitly if necessary. |
st179089 | Dear @mrshenli,
I tested the RPC framework with two nodes for a model-parallel implementation. The distributed autograd and optimizer work successfully the way I constructed them, following the template in the RPC tutorial https://pytorch.org/tutorials/intermediate/rpc_tutorial.html. However, I do see a memory problem on the GPU, and the memory usage grows with the number of epochs. I wonder if you could let me know what the problem could be.
I constructed a very simple CNN for the classification of the FashionMNIST dataset. Then I divided it into two submodels, one for convolutional layers and the other for fully-connected layers as below:
class ConvNet(nn.Module):
def __init__(self, device):
super().__init__()
self.device = device
self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5).to(self.device)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5).to(self.device)
def forward(self, rref):
t = rref.to_here().to(self.device)
# conv 1
t = self.conv1(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# conv 2
t = self.conv2(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
return t.cpu()
class FCNet(nn.Module):
def __init__(self,device):
super().__init__()
self.device = device
self.fc1 = nn.Linear(in_features=12*4*4, out_features=120).to(self.device)
self.fc2 = nn.Linear(in_features=120, out_features=60).to(self.device)
self.out = nn.Linear(in_features=60, out_features=10).to(self.device)
def forward(self, rref):
t = rref.to_here().to(self.device)
# fc1
t = t.reshape(-1, 12*4*4)
t = self.fc1(t)
t = F.relu(t)
# fc2
t = self.fc2(t)
t = F.relu(t)
# output
t = self.out(t)
# don't need softmax here since we'll use cross-entropy as activation.
return t.cpu()
To wrap them up, I created another CNNModel class for the purpose and perform the forward pass:
class CNNModel(nn.Module):
def __init__(self, connet_wk, fcnet_wk, device):
super(CNNModel, self).__init__()
# set up the ConvNet remotely
self.device = device
self.convnet_rref = rpc.remote(connet_wk, ConvNet,args=(device,))
# set up the FCNet remotely
print(self.convnet_rref.to_here())
self.fcnet_rref = rpc.remote(fcnet_wk, FCNet,args=(device,))
print(self.fcnet_rref.to_here())
print('CNN model constructed: ' + 'owner')
def forward(self, inputreff):
convnet_forward_rref = rpc.remote(self.convnet_rref.owner(), _call_method, args=(ConvNet.forward, self.convnet_rref, inputreff))
fcnet_forward_rref = rpc.remote(self.fcnet_rref.owner(), _call_method, args=(FCNet.forward, self.fcnet_rref, convnet_forward_rref))
return fcnet_forward_rref
def parameter_rrefs(self):
remote_params = []
remote_params.extend(_remote_method(_parameter_rrefs, self.convnet_rref))
remote_params.extend(_remote_method(_parameter_rrefs, self.fcnet_rref))
return remote_params
For training, I have a trainer to do that using Distributed Autograd and Optimiser:
class Trainer(object):
def __init__(self, model, optimizer, train_loader, test_loader, device):
self.model = model
self.optimizer = optimizer
self.train_loader = train_loader
self.test_loader = test_loader
self.device = device
def fit(self, epochs):
for epoch in range(1, epochs + 1):
train_loss, train_acc = self.train()
test_loss, test_acc = self.evaluate()
print(
'Epoch: {}/{},'.format(epoch, epochs),
'train loss: {}, train acc: {},'.format(train_loss, train_acc),
'test loss: {}, test acc: {}.'.format(test_loss, test_acc),
)
def train(self):
train_loss = Average()
train_acc = Accuracy()
for data, target in self.train_loader:
with dist_autograd.context() as context_id:
data_ref = RRef(data)
output_ref = self.model(data_ref)
output = output_ref.to_here()
loss = F.cross_entropy(output, target)
dist_autograd.backward([loss])
self.optimizer.step()
train_loss.update(loss.item(), data.size(0))
train_acc.update(output, target)
return train_loss, train_acc
def evaluate(self):
self.model.eval()
test_loss = Average()
test_acc = Accuracy()
with torch.no_grad():
for data, target in self.test_loader:
with dist_autograd.context() as context_id:
data_ref = RRef(data)
output_ref = self.model(data_ref)
output = output_ref.to_here()
loss = F.cross_entropy(output, target)
test_loss.update(loss.item(), data.size(0))
test_acc.update(output, target)
return test_loss, test_acc
At the top level, I created a CNNModel, initialized the RPC framework and passed the Distributed Optimizer to the trainer:
**#Worker 0**
def run(args):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
model = CNNModel(args['host'], args['worker'],device)
# setup distributed optimizer
opt = DistributedOptimizer(
optim.Adam,
model.parameter_rrefs(),
lr=args['lr'],
)
train_loader = MNISTDataLoader(args['root'], args['batch_size'], train=True)
test_loader = MNISTDataLoader(args['root'], args['batch_size'], train=False)
trainer = Trainer(model, opt, train_loader, test_loader, device)
trainer.fit(args['epochs'])
def main():
argv = {'world_size': int(2),
'rank': int(0),
'host': "worker0",
'worker': "worker1",
'epochs': int(10),
'lr': float(1e-3),
'root': 'data',
'batch_size': int(32)
}
print(argv)
rpc.init_rpc(argv['host'], rank=argv['rank'], world_size=argv['world_size'])
print('Start Run', argv['rank'])
run(argv)
rpc.shutdown()
os.environ['MASTER_ADDR'] = '10.142.0.13'#Google Cloud
#os.environ['MASTER_ADDR'] = 'localhost' #local
os.environ['MASTER_PORT'] = '29505'
main()
**#Worker 1**
def main():
argv = {'world_size': int(2),
'rank': int(1),
'host': 'worker0',
'worker': 'worker1',
'epochs': int(10),
'lr': float(1e-3),
'root': 'data',
'batch_size': int(32)
}
print(argv)
rpc.init_rpc(argv['worker'], rank=argv['rank'], world_size=argv['world_size'])
print('Start Run', argv['rank'])
rpc.shutdown()
os.environ['MASTER_ADDR'] = '10.142.0.13'#Google Cloud
#os.environ['MASTER_ADDR'] = 'localhost' #local
os.environ['MASTER_PORT'] = '29505'
main()
The dataloader is as the following:
from torch.utils import data
from torchvision import datasets, transforms
class MNISTDataLoader(data.DataLoader):
def __init__(self, root, batch_size, train=True):
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
])
dataset = datasets.FashionMNIST(root, train=train, transform=transform, download=True)
super(MNISTDataLoader, self).__init__(
dataset,
batch_size=batch_size,
shuffle=train,
)
I showed all the details above, but I guess the problem could be the way I constructed the CNNModel from the ConvNet and FCNet. I wonder if you could take a look at the code and give some hints on where the problem could be?
Thank you very much for your time!
Best,
Ziyi |
st179090 | How fast does the memory usage increase? Does it keep increasing after every epoch, or does it stabilize after a few epochs?
It could be that an RRef or distributed autograd context wasn't deleted in time. It might be worth providing an API to block waiting for all RPC workers to clear RRefs and dist autograd contexts. cc @pritamdamania87 |
st179091 | Sorry, what I wrote in the previous post was not clear. The GPU usage keeps growing during training, not necessarily in relation to the epochs. I took a closer look at the GPU memory usage. On worker #1, I kept running
nvidia-smi
while the program was running, and the memory usage kept increasing. The original GPU memory usage should be small, around 500 MB. But when I use RPC it keeps growing; each time I ran nvidia-smi I could see a few more MB used, and between epochs I can also see a big jump in memory usage.
Best,
Ziyi |
st179092 | Looks like there is a leak.
For RRef, there is an internal API _rref_context_get_debug_info to check the number of living OwnerRRefs (with key num_owner_rrefs). There are examples in the tests.
Similarly, distributed autograd can also show the number of living backward passes (example).
BTW, which version of PyTorch are you using? I recall we fixed some memory leaks after v1.4 release, e.g., this PR. |
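(A hedged sketch of checking that counter; this is an internal, undocumented API, so its exact module path and dictionary keys may differ between PyTorch versions.)
from torch.distributed.rpc.api import _rref_context_get_debug_info
info = _rref_context_get_debug_info()
# number of OwnerRRefs still alive on this worker; a steadily growing value hints at a leak
print(info["num_owner_rrefs"])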
st179093 | Hi @mrshenli,
I am using PyTorch 1.4.0, installed with the following command from the official website:
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
Sure, I will check the posts and see if I can figure out the problem by myself. Please let me know if the version is a problem, and it is much appreciated if you could share more thoughts on the code I provided above.
Thank you!
Best,
Ziyi |
st179094 | Please let me know if the version is a problem
Yes, the memory leak could be fixed by new PRs landed after v1.4 (v1.4 branch cut was 12/08/2019 IIRC). Can you try the master branch, or the nightly (conda install pytorch torchvision -c pytorch-nightly), or if you can share a self-runnable script, we can try that too.
if you could share more thoughts on the code I provided above.
The code you share above looks correct to me. |
st179095 | Hi @mrshenli,
Yes, I just tested my code with the PyTorch-nightly version and it does not have the memory leakage issue anymore. Also, the syntax is more similar to what you have shown in the tutorial.
Thank you very much for your help!
Best,
Ziyi |
st179096 | Hi @mrshenli,
Sorry, my test back then was okay. However, I just created a new instance on Google Cloud and installed the latest PyTorch nightly, and now it raises an error during training.
I am here to provide the code for the RPC training. Please take a look if you would have time!
GitHub: ZiyiZhu/RPC_FashionMNIST
Thank you. |
st179097 | IIUC, this is caused by a race introduced in a recent PR. @pritamdamania87 has a fix for that. BTW, this bug is not in v1.5; if you can install the v1.5 RC or wait for that fix to land in the nightly, this error should disappear. |
st179098 | Thank you very much for your information and response. Sure I will wait for the fix and the new release.
Best,
Ziyi |
st179099 | Hi everyone. I'm going to use the Nvidia Apex package to speed up training of my model with automatic mixed precision. However, even though the loss continues to drop, the model's inference results do not improve. My training code is as follows:
import os
import argparse
import time
import tqdm
import cv2
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.distributed as dist
from config.config import GetConfig, COCOSourceConfig, TrainingOpt
from data.mydataset import MyDataset
from torch.utils.data import DataLoader
from models.posenet import Network
from models.loss_model import MultiTaskLoss
import warnings
try:
from apex.parallel import DistributedDataParallel as DDP
from apex.fp16_utils import *
from apex import amp
from apex.multi_tensor_apply import multi_tensor_applier
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to run this example.")
warnings.filterwarnings("ignore")
parser = argparse.ArgumentParser(description='PoseNet Training')
parser.add_argument('--resume', '-r', action='store_true', default=True, help='resume from checkpoint')
parser.add_argument('--checkpoint_path', '-p', default='link2checkpoints_distributed', help='save path')
parser.add_argument('--max_grad_norm', default=5, type=float,
help=("If the norm of the gradient vector exceeds this, "
"re-normalize it to have the norm equal to max_grad_norm"))
# FOR DISTRIBUTED: Parse for the local_rank argument, which will be supplied automatically by torch.distributed.launch.
parser.add_argument("--local_rank", default=0, type=int)
parser.add_argument('--opt-level', type=str, default='O1')
parser.add_argument('--sync_bn', action='store_true', default=False, help='enabling apex sync BN.')
parser.add_argument('--keep-batchnorm-fp32', type=str, default=None)
parser.add_argument('--loss-scale', type=str, default=None)
parser.add_argument('--print-freq', '-f', default=10, type=int, metavar='N', help='print frequency (default: 10)')
torch.backends.cudnn.benchmark = True
use_cuda = torch.cuda.is_available()
args = parser.parse_args()
checkpoint_path = args.checkpoint_path
opt = TrainingOpt()
config = GetConfig(opt.config_name)
soureconfig = COCOSourceConfig(opt.hdf5_train_data)
train_data = MyDataset(config, soureconfig, shuffle=False, augment=True) # shuffle in data loader
soureconfig_val = COCOSourceConfig(opt.hdf5_val_data)
val_data = MyDataset(config, soureconfig_val, shuffle=False, augment=True) # shuffle in data loader
best_loss = float('inf')
start_epoch = 0
args.distributed = False
if 'WORLD_SIZE' in os.environ:
args.distributed = int(os.environ['WORLD_SIZE']) > 1
args.gpu = 0
args.world_size = 1
# FOR DISTRIBUTED: If we are running under torch.distributed.launch,
# the 'WORLD_SIZE' environment variable will also be set automatically.
if args.distributed:
args.gpu = args.local_rank
torch.cuda.set_device(args.gpu)
# Initializes the distributed backend which will take care of synchronizing nodes/GPUs
torch.distributed.init_process_group(backend='nccl', init_method='env://')
args.world_size = torch.distributed.get_world_size() # get the number of processes in the distributed training job
assert torch.backends.cudnn.enabled, "Amp requires cudnn backend to be enabled."
posenet = Network(opt, config, dist=True, bn=False)
# Actual working batch size on multi-GPUs is 4 times bigger than that on one GPU
# fixme: add up momentum if the batch grows?
optimizer = optim.SGD(posenet.parameters(), lr=opt.learning_rate * args.world_size, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.2, last_epoch=-1)
if args.sync_bn:
# This should be done before model = DDP(model, delay_allreduce=True),
# because DDP needs to see the finalized model parameters
# We rely on torch distributed for synchronization between processes. Only DDP support the apex sync_bn now.
import apex
print("Using apex synced BN.")
posenet = apex.parallel.convert_syncbn_model(posenet)
posenet.cuda()
# Initialize Amp. Amp accepts either values or strings for the optional override arguments,
# for convenient interoperation with argparse.
# For distributed training, wrap the model with apex.parallel.DistributedDataParallel.
# This must be done AFTER the call to amp.initialize.
model, optimizer = amp.initialize(posenet, optimizer,
opt_level=args.opt_level,
keep_batchnorm_fp32=args.keep_batchnorm_fp32,
loss_scale=args.loss_scale) # Dynamic loss scaling is used by default.
# delay_allreduce delays all communication to the end of the backward pass.
if args.distributed:
# By default, apex.parallel.DistributedDataParallel overlaps communication with computation in the backward pass.
# model = DDP(model)
# delay_allreduce delays all communication to the end of the backward pass.
model = DDP(model, delay_allreduce=True)
train_sampler = None
val_sampler = None
# Restricts data loading to a subset of the dataset exclusive to the current process
# Create DistributedSampler to handle distributing the dataset across nodes when training
# This can only be called after distributed.init_process_group is called
if args.distributed:
train_sampler = torch.utils.data.distributed.DistributedSampler(train_data)
val_sampler = torch.utils.data.distributed.DistributedSampler(val_data)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=opt.batch_size, shuffle=(train_sampler is None),
num_workers=16, pin_memory=True, sampler=train_sampler, drop_last=True)
val_loader = torch.utils.data.DataLoader(val_data, batch_size=opt.batch_size, shuffle=False,
num_workers=4, pin_memory=True, sampler=val_sampler, drop_last=True)
for param in model.parameters():
if param.requires_grad:
print('Parameters of network: Autograd')
break
# Update the learning rate for start_epoch times
for i in range(start_epoch):
scheduler.step()
def train(epoch):
print('\n ############################# Train phase, Epoch: {} #############################'.format(epoch))
posenet.train()
if args.distributed:
train_sampler.set_epoch(epoch)
# train_loss = 0
scheduler.step()
print('\nLearning rate at this epoch is: %0.9f\n' % optimizer.param_groups[0]['lr']) # scheduler.get_lr()[0]
batch_time = AverageMeter()
losses = AverageMeter()
end = time.time()
for batch_idx, target_tuple in enumerate(train_loader):
# images.requires_grad_()
# loc_targets.requires_grad_()
# conf_targets.requires_grad_()
if use_cuda:
target_tuple = [target_tensor.cuda(non_blocking=True) for target_tensor in target_tuple]
# target tensor shape: [8,512,512,3], [8, 1, 128,128], [8,43,128,128], [8,36,128,128], [8,36,128,128]
images, mask_misses, heatmaps = target_tuple # , offsets, mask_offsets
# images = Variable(images)
# loc_targets = Variable(loc_targets)
# conf_targets = Variable(conf_targets)
loss = model(images, target_tuple[1:])
optimizer.zero_grad() # zero the gradient buff
if loss.item() > 1e6:
print("\nLoss is abnormal, drop this batch !")
loss.zero_()
continue
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
torch.nn.utils.clip_grad_norm(model.parameters(), args.max_grad_norm)
optimizer.step()
if batch_idx % args.print_freq == 0:
# Every print_freq iterations, check the loss, accuracy, and speed.
# For best performance, it doesn't make sense to print these metrics every
# iteration, since they incur an allreduce and some host<->device syncs.
if args.distributed:
# We manually reduce and average the metrics across processes. In-place reduce tensor.
reduced_loss = reduce_tensor(loss.data)
else:
reduced_loss = loss.data
# to_python_float incurs a host<->device sync
losses.update(to_python_float(reduced_loss), images.size(0)) # update needs average and number
torch.cuda.synchronize()
batch_time.update((time.time() - end) / args.print_freq)
end = time.time()
if args.local_rank == 0: # Print them in the Process 0
print('==================> Epoch: [{0}][{1}/{2}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Speed {3:.3f} ({4:.3f})\t'
'Loss {loss.val:.10f} ({loss.avg:.4f}) <================ \t'.format(
epoch, batch_idx, len(train_loader),
args.world_size * opt.batch_size / batch_time.val,
args.world_size * opt.batch_size / batch_time.avg,
batch_time=batch_time,
loss=losses))
global best_loss
# train_loss /= (len(train_loader)) # Each GPU process can only see 1/(world_size) training samples per epoch
if args.local_rank == 0:
# Write the log file each epoch.
os.makedirs(checkpoint_path, exist_ok=True)
logger = open(os.path.join('./' + checkpoint_path, 'log'), 'a+')
logger.write('\nEpoch {}\ttrain_loss: {}'.format(epoch, losses.avg))
logger.flush()
logger.close()
if losses.avg < best_loss:
# Update the best_loss if the average loss drops
best_loss = losses.avg
print('Saving model checkpoint...')
state = {
# not posenet.state_dict(). then, we don't ge the "module" string to begin with
'weights': model.module.state_dict(),
'optimizer_weight': optimizer.state_dict(),
'train_loss': losses.avg,
'epoch': epoch
}
torch.save(state, './' + checkpoint_path + '/PoseNet_' + str(epoch) + '_epoch.pth')
def test(epoch):
print('\n ############################# Test phase, Epoch: {} #############################'.format(epoch))
posenet.eval()
if args.distributed:
train_sampler.set_epoch(epoch)
batch_time = AverageMeter()
losses = AverageMeter()
end = time.time()
for batch_idx, target_tuple in enumerate(val_loader):
# images.requires_grad_()
# loc_targets.requires_grad_()
# conf_targets.requires_grad_()
if use_cuda:
target_tuple = [target_tensor.cuda(non_blocking=True) for target_tensor in target_tuple]
# target tensor shape: [8,512,512,3], [8, 1, 128,128], [8,43,128,128], [8,36,128,128], [8,36,128,128]
images, mask_misses, heatmaps = target_tuple # , offsets, mask_offsets
with torch.no_grad():
_, loss = model(images, target_tuple[1:])
if args.distributed:
# We manually reduce and average the metrics across processes. In-place reduce tensor.
reduced_loss = reduce_tensor(loss.data)
else:
reduced_loss = loss.data
# to_python_float incurs a host<->device sync
losses.update(to_python_float(reduced_loss), images.size(0)) # update needs average and number
torch.cuda.synchronize()
batch_time.update((time.time() - end))
end = time.time()
if args.local_rank == 0: # Print them in the Process 0
print('==================>Test: [{0}/{1}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Speed {2:.3f} ({3:.3f})\t'
'Loss {loss.val:.4f} ({loss.avg:.4f})\t'.format(
batch_idx, len(val_loader),
args.world_size * opt.batch_size / batch_time.val,
args.world_size * opt.batch_size / batch_time.avg,
batch_time=batch_time, loss=losses))
if args.local_rank == 0: # Print them in the Process 0
# Write the log file each epoch.
os.makedirs(checkpoint_path, exist_ok=True)
logger = open(os.path.join('./' + checkpoint_path, 'log'), 'a+')
logger.write('\tval_loss: {}'.format(losses.avg))
logger.flush()
logger.close()
def adjust_learning_rate(optimizer, epoch, step, len_epoch):
"""LR schedule that should yield 76% converged accuracy with batch size 256"""
factor = epoch // 30
if epoch >= 80:
factor = factor + 1
lr = args.lr*(0.1**factor)
"""Warmup"""
if epoch < 5:
lr = lr*float(1 + step + epoch*len_epoch)/(5.*len_epoch) # len_epoch=len(train_loader)
# if(args.local_rank == 0):
# print("epoch = {}, step = {}, lr = {}".format(epoch, step, lr))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def reduce_tensor(tensor):
# Reduces the tensor data across all machines
# If we print the tensor, we can get:
# tensor(334.4330, device='cuda:1') *********************, here is cuda: cuda:1
# tensor(359.1895, device='cuda:3') *********************, here is cuda: cuda:3
# tensor(263.3543, device='cuda:2') *********************, here is cuda: cuda:2
# tensor(340.1970, device='cuda:0') *********************, here is cuda: cuda:0
rt = tensor.clone() # The function operates in-place.
dist.all_reduce(rt, op=dist.reduce_op.SUM)
rt /= args.world_size
return rt
if __name__ == '__main__':
for epoch in range(start_epoch, start_epoch + 80):
train(epoch)
test(epoch) |
st179100 | To be more specific, I have followed the ImageNet example in Nvidia Apex. I wrote the loss computation inside my Network module, which looks like the following:
class Network(torch.nn.Module):
"""
Wrap the network module as well as the loss module on all GPUs to balance the computation among GPUs.
"""
def __init__(self, opt, config, bn=False, dist=False):
super(Network, self).__init__()
self.posenet = PoseNet(opt.nstack, opt.hourglass_inp_dim, config.num_layers, bn=bn)
# If we use train_parallel, we implement the parallel loss . And if we use train_distributed,
# we should use single process loss because each process on these 4 GPUs is independent
self.criterion = MultiTaskLoss(opt, config) if dist else MultiTaskLossParallel(opt, config)
def forward(self, inp_imgs, target_tuple):
# Batch will be divided and Parallel Model will call this forward on every GPU
output_tuple = self.posenet(inp_imgs)
loss = self.criterion(output_tuple, target_tuple)
if not self.training:
# output will be concatenated along batch channel automatically after the parallel model return
return output_tuple, loss
else:
# output will be concatenated along batch channel automatically after the parallel model return
return loss
The training loss seems normal:
Epoch 0
train_loss: 589.6713480631511
val_loss: 536.4533081054688
Epoch 1
train_loss: 446.2322041829427
val_loss: 440.89935302734375
Epoch 2
train_loss: 436.07487325032554
val_loss: 433.20953369140625
Epoch 3
train_loss: 433.3325126139323
val_loss: 396.94744873046875
Epoch 4
train_loss: 425.1072373453776
val_loss: 406.3310546875
Epoch 5
train_loss: 418.57773783365883
val_loss: 392.5045166015625
Epoch 6
train_loss: 409.60796936035155
val_loss: 419.2001037597656
Epoch 7
train_loss: 410.79097737630207
val_loss: 409.8291320800781
Epoch 8
train_loss: 404.4842706298828
val_loss: 407.05352783203125
Epoch 9
train_loss: 399.4785394287109
val_loss: 388.7215881347656
Epoch 10
train_loss: 389.387607421875
val_loss: 379.6018981933594
Epoch 11
train_loss: 386.5943516031901
val_loss: 397.2137451171875
Epoch 12
train_loss: 382.25890686035154
val_loss: 376.7177734375
Epoch 13
train_loss: 387.2037613932292
val_loss: 360.4934387207031
Epoch 14
train_loss: 379.99100199381513
val_loss: 377.1543884277344
Epoch 15
train_loss: 381.0046073404948
val_loss: 378.36041259765625
Epoch 16
train_loss: 378.6185076904297
val_loss: 365.29205322265625
Epoch 17
train_loss: 380.5766967773437
val_loss: 364.39569091796875
Epoch 18
train_loss: 382.2865834554037
val_loss: 368.50152587890625
But the model does not seem to be trained well, and the prediction results refuse to get better (which is bad, actually).
I have struggled with this problem for a while. If I don’t use distributed training or Apex auto mixed-precision and I only wrap my Network module with torch.nn.parallel.DataParallel, everything goes fine and the prediction is good. |
st179101 | Numerical issues are notoriously hard to debug.
Can you isolate the issue to either distributed or mixed precision?
@mcarilli Any ideas? |
st179102 | This may well be an Apex bug. About a week ago, for a few days, the combination of dynamic loss scaling + Apex DDP was broken in Apex master. I fixed it in https://github.com/NVIDIA/apex/commit/8437d29505fcc7fad28183395abd89a09a17efe6, so maybe a fresh clone + reinstall of Apex will resolve the issue. Be sure to clean the old install before rebuilding:
pip uninstall apex
cd apex_repo_dir
rm -rf build (if present)
rm -rf apex.egg-info (if present)
git pull
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" . |
st179103 | Thank you for your reply. The problem has not been solved yet.
If I remove the clip_norm in the training step, the gradients explode after some batches. The training process looks okay before the explosion. No norm operation is used in my case. All input tensors and ground-truth tensors are normalized into [0,1]. L2 loss and weight_decay are used. I have no idea which detail I should concentrate on. |
st179104 | Did you try a fresh clone and install of Apex?
Gradient clipping does require special treatment for compatibility with all opt_levels: https://nvidia.github.io/apex/advanced.html#gradient-clipping |
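(The pattern documented on that page looks roughly like the following; loss, optimizer, and args.max_grad_norm are assumed to come from the training script above. The key point is clipping amp.master_params(optimizer) after the scaled backward pass, rather than model.parameters().)
import torch
from apex import amp
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
optimizer.step()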
st179105 | Yes, I have followed your instruction to reinstall Apex. My problem is strange. I printed the abnormal value during the distributed training:
First, my model has various scales of feature map predicted (cascaded CNN and in each stage/stack has 5 scale output)
This is normal for some batches, and the element-wise output should in the range of [0,1] (gaussian heatmap regression)
heatmap L2 loss per stack......... [ 69848.01 59730.246 223546.12 60869.35 ]
heatmap L2 loss per stack......... [15058.608 13271.5 13770.041 13684.25 ]
heatmap L2 loss per stack......... [3515.2559 3062.7026 2899.563 2879.3105]
heatmap L2 loss per stack......... [ 84485.94 76283.11 219553.47 77723.48]
heatmap L2 loss per stack......... [ 70769.086 63346.633 209632.16 64268.496]
heatmap L2 loss per stack......... [18312.457 17451.66 17986.875 17935.975]
However, the loss suddenly becomes abnormal: the elements of the output become very large (such as 223) and then grow rapidly, resulting in a gradient explosion.
Dangerous! Check pred, gt, mask_miss: ======> tensor(223.1250, device='cuda:2', dtype=torch.float16, grad_fn=<MaxBackward1>) tensor(1., device='cuda:2') tensor(1., device='cuda:2')
heatmap L2 loss per stack......... [0. 0. 0. 0.]
Dangerous! Check pred, gt, mask_miss: ======> tensor(223.1250, device='cuda:1', dtype=torch.float16, grad_fn=<MaxBackward1>) tensor(1., device='cuda:1') tensor(1., device='cuda:1')
Dangerous! Check pred, gt, mask_miss: ======> tensor(222.7500, device='cuda:3', dtype=torch.float16, grad_fn=<MaxBackward1>) tensor(1., device='cuda:3') tensor(1., device='cuda:3')
Dangerous! Check pred, gt, mask_miss: ======> tensor(222.7500, device='cuda:0', dtype=torch.float16, grad_fn=<MaxBackward1>) tensor(1., device='cuda:0') tensor(1., device='cuda:0') |
st179106 | Update: the problem has been solved. I added a clamp on the loss value and changed the weights of the multi-scale losses. It seems that the automatic loss scaling in Apex is not perfect yet. |
st179107 | Hello all,
I am trying to implement distributed parallel training for my model, and I followed the ImageNet example for this. I am pretty new to distributed programming, so I am not sure about a bunch of things. When I use torch.multiprocessing.spawn with join=True, no output is printed. When I change join to False, I get the error below.
<torch.multiprocessing.spawn.SpawnContext object at 0x2b49eee8a3c8>
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.5/multiprocessing/spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "/usr/lib/python3.5/multiprocessing/spawn.py", line 116, in _main
self = pickle.load(from_parent)
File "/usr/lib/python3.5/multiprocessing/synchronize.py", line 111, in __setstate__
self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.5/multiprocessing/spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "/usr/lib/python3.5/multiprocessing/spawn.py", line 116, in _main
self = pickle.load(from_parent)
File "/usr/lib/python3.5/multiprocessing/synchronize.py", line 111, in __setstate__
self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory
I am submitting this job through a SLURM script that I have put below as well.
#!/bin/sh
#SBATCH --ntasks=4
#SBATCH --time=60:00:00
#SBATCH --partition=gpu
#SBATCH --mem=64gb
#SBATCH --nodes=2
#SBATCH --gres=gpu:2
#SBATCH --constraint=gpu_32gb
#SBATCH --job-name=test
#SBATCH --output=.../out_files/test.out
export PYTHONPATH=$WORK/tf-gpu-pkgs
module load singularity
singularity exec docker://<user>/pytorch-opencv:latest python3 -u $@ --use_adam=1 --multiprocessing_distributed --benchmarks=0 --benchmark_arch='vgg19' --batch_size=128 --test=0 --transfer=0 --dataset='<dataset-here>'
My code is like the ImageNet example and I am not sure what I am doing wrong.
Thank you, |
st179108 | Are you capturing the SpawnContext object returned by the call to torch.multiprocessing.spawn? This SpawnContext is returned only when join=False, and must be saved for the spawned processes to coordinate IPC. If you allow the object to be destructed, you will see this error.
Here is a GitHub issue with some more information: https://github.com/pytorch/pytorch/issues/30461 |
st179109 | Hello Omkar,
Thank you for replying. The weird issue is that I don’t see the terminated print statement when I use join=True. With the issue that you linked to me, when I spawn the process, shouldn’t I be seeing the print statements from my main_worker function before I hit the terminated print statement? I apologize if this question isn’t framed right. I am new to distributed and don’t understand the system that well.
ctx = mp.spawn(main_worker, nprocs=ngpus_per_node,
args=(ngpus_per_node, args), join=False)
time.sleep(3)
print('terminated')
ctx.join()
else:
# Simply call main_worker function
main_worker(args.gpu, ngpus_per_node, args)
def main_worker(gpu, ngpus_per_node, args):
global best_acc1
print(gpu)
args.gpu = gpu
diff --git a/repro_org.py b/repro.py
index be44c3d..e971db4 100644
--- a/repro_org.py
+++ b/repro.py
@@ -6,7 +6,8 @@ def worker(nproc, arg1, arg2, arg3):
test = True
if __name__ == '__main__':
- mp.spawn(worker, (None, None, None), nprocs=1, join=False)
+ ctx = mp.spawn(worker, (None, None, None), nprocs=1, join=False)
time.sleep(3)
print('terminated')
+ ctx.join() |
st179110 | I was looking into training machine learning models on multiple cores. To be clear, suppose I have "N" machine learning units (e.g., a three-layer neural network [in-hid-out]). Each of the units is identical to the others. However, I want to train each network on a different input of the same nature (e.g., if I have 10 machine learning units with MNIST data as input, each of the 10 units will be trained on a different set of data). You can think of it as training the networks on MNIST in 10 geographically dispersed locations, where we are not sure which network will have which set of inputs. However, at some point I want the machine learning models to communicate while updating the weights, and I want to distribute the same weights to all the models. For people who know federated learning, it's like applying federated learning across multiple CPU/GPU cores.
Is it possible to do something like this on multiple cores of a CPU or GPU? Or is there any documentation that you can point me to?
@ptrblck @rasbt |
st179111 | Solved by mrshenli in post #2
IIUC, this is typical DistributedDataParallel training? If so, yes, PyTorch natively supports that. Here is another tutorial. |
st179112 | IIUC, this is typical DistributedDataParallel training? If so, yes, PyTorch natively supports that. Here is another tutorial. |
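(A minimal sketch of that setup; MyModel is a placeholder, and the rendezvous environment variables such as MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE are assumed to be set by you or your launcher.)
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
dist.init_process_group(backend="gloo")  # or "nccl" when each process owns a GPU
model = MyModel()                        # placeholder for your network
ddp_model = DDP(model)
# each process then loops over its own local data; DDP averages gradients
# across processes during loss.backward(), keeping the parameters in sync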
st179113 | I am not sure what to call this, but it's a kind of distributed training where the neural networks in different processes communicate while updating the weights. Would the approaches you provided do the same? |
st179114 | P.S. I don't want to distribute a single set of inputs to multiple nodes/processes. Suppose I have 2 nodes, N1 and N2; then I need to send one set of inputs to N1 and a different set of inputs to N2 (not collected in batches from a common dataset). I am not sure if I explained it correctly. Sorry about that. |
st179115 | But its kind of distributed training where each of the neural network in different processes communicate while updating the weight
Does it have to communicate parameters instead of gradients? If all you need is to keep the parameters on all processes in sync, communicating gradients should be sufficient I think.
I don’t want to distribute a single set of input to multiple nodes/processes.
Yep, DDP does not split inputs; instead, each process needs to prepare its own inputs.
One question is, does the parameter/gradient communication occur in a synchronous or asynchronous fashion? "Synchronous" means all processes communicate at exactly the same time, while asynchronous can be something like gossip. DDP only works for synchronous use cases. If you need asynchronous communication, you can directly use c10d (allreduce, allgather, broadcast, etc.) and create multiple sub-groups to perform communication. |
st179116 | (diagram of the proposed distributed training setup)
I am not a machine learning expert, so please forgive any errors as I write this:
I think the figure explains this. Since I will be gathering the weights of all the networks residing in different processes, I need to pass the weight parameters, right? And the weight gathering needs to happen asynchronously, at different points in time. But to get started, we can assume for now that the weight gathering happens synchronously, at the same point in time.
Furthermore, the gathered weights need to be averaged at the root process, and the aggregated weights should be broadcast back to the networks for the next epoch. I am not sure if this comes across, given my roundabout way of explaining things, but thanks for being patient and replying promptly.
If you could tell me a specific way to handle this, I could narrow down my scope of researching the documents and tutorials. Thank you again. |
st179117 | P.S. each of the networks will have local training epochs (hence the weights are updated at different points in time) |
st179118 | Based on the diagram and explanations, it seems like you are trying to train the network, with each node training on its own data, and ensuring that the parameters stay in sync across all the nodes. If this is the case, you will want to communicate the gradients, and not the weights.
DDP works perfectly for synchronous distributed training. Each node will independently perform the forward and backward pass on its own batch of data. Then, each node will send its computed gradients to every other node. Once each node has the gradients for all other nodes, they will independently average all the gradients and run the optimizer to perform the gradient update. One note is that there is no “root” process responsible for aggregating the gradients (what you’re describing is similar to a parameter server). In DDP, the nodes communicate with each to exchange gradients.
For asynchronous distributed training, you can use the c10d communication primitives as @mrshenli described above. |
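(A hedged sketch of the gradient exchange described above, done manually with c10d primitives; DDP performs essentially this averaging for you.)
import torch.distributed as dist
def average_gradients(model):
    world_size = float(dist.get_world_size())
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad.data, op=dist.ReduceOp.SUM)
            p.grad.data /= world_size
# per process: forward/backward on the local batch, then average and step, e.g.
# loss.backward(); average_gradients(model); optimizer.step()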
st179119 | Did some googling and found very few discussions on this matter. Is it best to perform all-reduce on, say, loss values, and keep track of it within the process with rank 0, like what the official tutorial recommends for checkpoints? |
st179120 | That seems to be the case; check out how they do it in the implementation of Mask-RCNN. They use reduce instead of all_reduce because, for logging, you only need the reduced and averaged values on the rank 0 process. |
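(A minimal sketch of that logging pattern, assuming the process group is already initialized and loss is a scalar tensor.)
import torch
import torch.distributed as dist
def log_average_loss(loss):
    with torch.no_grad():
        loss_t = loss.detach().clone()
        dist.reduce(loss_t, dst=0, op=dist.ReduceOp.SUM)  # only rank 0 receives the sum
        if dist.get_rank() == 0:
            print("average loss:", loss_t.item() / dist.get_world_size())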
st179121 | Referring to this old issue https://github.com/pytorch/pytorch/issues/14528, which was closed, I need to do communications (all_reduce/reduce/broadcast) in two or more different groups simultaneously. For example, if a process' rank belongs to, say, group g0, it should participate only in that group's communication.
Essentially, referring to https://github.com/pytorch/pytorch/issues/14528, I would need to apply an if conditional –
if local_rank in g0:
torch.distributed.all_reduce(t, group=g0)
elif local_rank in g1:
torch.distributed.all_reduce(t, group=g1)
– for participating in all_reduce so that simultaneous communications could happen in non-intersecting groups.
However, that results in NCCL INFO Call to connect returned Connection refused, retrying. Whereas doing these all_reduce operations sequentially, which essentially means that the collective communications in different groups are serialized, works fine.
Is there something that I am missing? |
st179122 | Hey @bapi
I tried the following, and it worked for me:
import torch
import torch.multiprocessing as mp
import os
def run(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
torch.cuda.set_device(rank)
# global group
torch.distributed.init_process_group(backend='nccl', rank=rank, world_size=world_size)
# torch.distributed.init_process_group(backend='gloo', init_method='env://')
g0 = torch.distributed.new_group(ranks=[0,1,2,3])
g1 = torch.distributed.new_group(ranks=[4,5,6,7])
# tensor to bcast over group
t = torch.tensor([1]).float().cuda().fill_(rank)
if rank < 4:
torch.distributed.all_reduce(t, group=g0)
else:
torch.distributed.all_reduce(t, group=g1)
print('rank: {} - val: {}'.format(rank, t.item()))
def main():
world_size = 8
mp.spawn(run,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__=="__main__":
main()
outputs:
$ python test.py
rank: 0 - val: 6.0
rank: 1 - val: 6.0
rank: 3 - val: 6.0
rank: 2 - val: 6.0
rank: 7 - val: 22.0
rank: 5 - val: 22.0
rank: 6 - val: 22.0
rank: 4 - val: 22.0 |
st179123 | bapi:
if local_rank in g0:
I am not sure if you acquired g0 and g1 from new_group API. If so, they are process group objects. So the above check would result in the following error:
TypeError: argument of type 'object' is not iterable |
st179124 | Dear @mrshenli, thanks very much for your response. Indeed that works.
Let me post my code adapted to your nice illustration that I actually need to work with as a part of a much longer implementation:
import torch
import torch.multiprocessing as mp
import os
def run(rank, groups, world_size):
print("Rank ", rank, " Group ", groups[str(rank)])
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
torch.cuda.set_device(groups[str(rank)]['gpu'])
# global group
torch.distributed.init_process_group(backend='nccl',
rank=rank,
world_size=world_size)
my_group = torch.distributed.new_group(ranks=groups[str(rank)]['grp'])
# tensor to bcast over group
t = torch.tensor([1]).float().cuda().fill_(rank)
torch.distributed.all_reduce(t, group=my_group)
print('rank: {} - val: {}'.format(rank, t.item()))
def assign_groups(num_masters, workers_per_master, available_gpus):
groups = {}
distranks = num_masters
gpu_allocated = 1
for i in range(num_masters):
my_group = [i]
gpus = [available_gpus[0]]
for j in range(workers_per_master):
my_group.append(distranks)
gpus.append(available_gpus[gpu_allocated])
distranks += 1
gpu_allocated += 1
for r, g in zip(my_group, gpus):
groups.update({
str(r): {
'grp': my_group,
'gpu': g
}
})
return groups
def main():
num_masters = 3
workers_per_master = 1
available_gpus = [0, 1, 2, 3]
groups = assign_groups(num_masters, workers_per_master, available_gpus)
world_size = 6
mp.spawn(run, args=(groups, world_size), nprocs=world_size, join=True)
if __name__ == "__main__":
main()
Essentially, I have 3 master processes here and each of the masters has a worker. The masters with their respective workers make a group. Thus there are three groups altogether. Although in the above code the masters are assigned to cuda:0 and each of the workers to the next cuda devices, it may change depending on the setting. Thus it is immaterial in this illustration whether I comment out the line torch.cuda.set_device(groups[str(rank)]['gpu']).
Now, each of the masters and workers has its assigned group available as groups[str(rank)]['grp']. Thus, when I run this code, the distributed communications should be called concurrently, just as in your example. However, it results in NCCL INFO Call to connect returned Connection refused.
Definitely I am doing something wrong here, not sure what.
Thanks again. |
st179125 | I think I figured out exactly what I was missing. Basically, each of the new process groups should be defined on every process before being used in any manner whatsoever.
A slightly modified version of your example that most likely solves my purpose is the following:
import torch
import torch.multiprocessing as mp
import os
def run(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
# global group
torch.distributed.init_process_group(backend='nccl',
rank=rank,
world_size=world_size)
# torch.distributed.init_process_group(backend='gloo', init_method='env://')
g0 = torch.distributed.new_group(ranks=[0, 2])
g1 = torch.distributed.new_group(ranks=[1, 3])
# tensor to bcast over group
t = torch.tensor([1]).float().cuda().fill_(rank)
if rank in [0, 2]:
my_group = g0
else:
my_group = g1
torch.distributed.all_reduce(t, group=my_group)
print('rank: {} - val: {}'.format(rank, t.item()))
def main():
world_size = 4
mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)
if __name__ == "__main__":
main()
It works.
Thanks again. |
st179126 | Happy to see that worked
Yes, the new_group 8 requires the following:
This function requires that all processes in the main group (i.e. all processes that are part of the distributed job) enter this function, even if they are not going to be members of the group. Additionally, groups should be created in the same order in all processes. |
st179127 | Is it possible to create shared memory / tensors among an OpenMPI-spawned process group? I know this can be done with processes created by the torch.multiprocessing package, but in my case I am unable to make that package work with OpenMPI processes. |
st179128 | One solution might be
create a shared memory buffer using MPI
create a numpy ndarray from that shared memory (example)
create a PyTorch tensor from that numpy ndarray (example)
The numpy ndarray and the tensor should then share the same storage, which is on the shared memory. |
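(A hedged sketch of those three steps using mpi4py; the buffer size and dtype are arbitrary examples.)
from mpi4py import MPI
import numpy as np
import torch
comm = MPI.COMM_WORLD
n = 1024
itemsize = MPI.FLOAT.Get_size()
# rank 0 allocates the shared window; the other ranks attach with size 0
size = n * itemsize if comm.Get_rank() == 0 else 0
win = MPI.Win.Allocate_shared(size, itemsize, comm=comm)
buf, _ = win.Shared_query(0)                      # every rank maps rank 0's buffer
arr = np.ndarray(buffer=buf, dtype=np.float32, shape=(n,))
t = torch.from_numpy(arr)                         # shares storage with arr, i.e. the MPI shared memory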
st179129 | I didn't say it, but I meant sharing CUDA tensors
(e.g., doing something like the Hogwild example, but with the model on the GPU instead of the CPU, and sharing its parameters with 2 MPI processes on the same node).
(For CPU tensors, your answer is correct; I once read a blog post demonstrating it, and all we need to change is to use MPI shared memory instead.)
However, my 2 cents on this: it works out of the box only for host shared memory. Somehow I couldn't create shared + pinned host memory with PyTorch easily. (I managed to do it with an ugly hack: os.fork. I didn't bother to make it work for MPI too.) |
st179130 | Hi!
My training works if I use multiple GPUs on a single machine (i.e., DataParallel).
one of the variables needed for gradient computation has been modified by
an inplace operation: [torch.cuda.FloatTensor [256, 256, 5, 5]] is at version 2;
expected version 1 instead. Hint: the backtrace further above shows the
operation that failed to compute its gradient. The variable in question was
changed in there or anywhere later. Good luck!
torch.autograd.set_detect_anomaly(True) points me to spectral_norm's code that updates the weight:
File ".../python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
hook(self, input)
File ".../python3.6/site-packages/torch/nn/utils/spectral_norm.py", line 99, in __call__
setattr(module, self.name, self.compute_weight(module, do_power_iteration=module.training))
File ".../python3.6/site-packages/torch/nn/utils/spectral_norm.py", line 86, in compute_weight
weight = weight / sigma
This definitely does not look like an inplace operation.
The same error occurs even if I use DistributedDataParallel for a single machine.
Any suggestions or ideas are more than welcome. Thanks in advance.
Versions
PyTorch: 1.1.0
CUDA: 9.0.176 |
st179131 | Hey @bornabesic,
Can you check whether setting broadcast_buffers=False in the DistributedDataParallel constructor works for you?
If not, can you try PyTorch v1.4?
If it still does not work, could you please provide code for minimum repro? Thanks! |
st179132 | @mrshenli
The problem happens regardless of the value of broadcast_buffers.
I managed to narrow down my search for the source of the problem.
The error occurs if I use multiple GPUs per machine AND multiple forward passes of the module before backward().
Otherwise, it works just fine. |
st179133 | multiple forward passes of the module before backward().
Do you mean running multiple forward passes on the same DDP instance before launching the backward pass? If so, it is expected to hit errors due to prepare_for_backward, though I would expect a different error message. A workaround is to wrap your multiple forward passes into one YourModule.forward function, and then use DDP to wrap YourModule. |
st179134 | Hi,
At the moment I am trying to implement a meta-learning algorithm, and since the model is quite large I am also trying to use DataParallel. However, I am currently encountering an issue with one GPU taking the brunt of the load and running out of memory. This is because I generate weights for each sample in my batch, which means I have to loop over these weights and apply a functional conv; this operation cannot be data-parallelized, so it all ends up on the same GPU.
Is there any easy way to feed a batch of weights to a functional conv or are there any plans to implement this in pytorch in the near future?
Cheers,
Vincent |
st179135 | vpolflie:
Is there any easy way to feed a batch of weights to a functional conv
Not sure if I understand the request clearly; it would be helpful if you could share some pseudo code.
If all you need is scatter conv weights (one weight per sample) across different GPUs, looks like you can wrap that (samples + one conv layer per GPU) into a custom function? In the forward function, you can do sth like:
def forward(self, samples):
outputs = []
for sample in samples:
weight = generate_per_sample_weight(sample)
replace_conv_weight(self.conv, weight)
outputs.append(self.conv(sample))
return outputs |
st179136 | Sorry for the late response.
The code you provided is essentially the pseudo code I would give, and it is very similar to the code in my code base; one small difference is that the weights are generated from different samples.
def forward(self, samples, reference_samples):
outputs = []
for sample in samples:
weight = generate_per_sample_weight(reference_samples)
replace_conv_weight(self.conv, weight)
outputs.append(self.conv(sample))
return outputs
However, the main issue is that PyTorch doesn’t distribute this over multiple GPUs because of the for loop; with the standard DataParallel package these calculations all end up on the first (main) GPU.
I was wondering if there is an easy way to write something like this which still allows for data parallelisation:
weights: Batch x # INPUT FILTERS x # OUTPUT FILTERS x FILTER WIDTH x FILTER HEIGHT
samples: BATCH x CHANNELS x WIDTH x HEIGHT
def forward(self, samples, weights):
outputs = self.conv(samples, weights)
return outputs
So this self.conv function would then be one purely based on matrices like the original conv one, which should allow data parallelisation. |
st179137 | vpolflie:
I was wondering if there is an easy way to write something like this which still allows for data parallelisation:
This should work, as DataParallel simply replicates model and scatters input. (assuming self.conv is a customized conv layer that replaces weight) So if you wrap that with DataParallel, different thread/replica should see samples/weights on a different device. Did you encounter any issue when doing this? |
st179138 | At the moment I have the following pseudo code:
def batch_conv(x, weight, bias=None, stride=1, groups=1):
    y = []
    for i in range(x.size(0)):
        yi = F.conv_transpose2d(x[i:i+1], weight=weight[i], bias=bias[i, :weight.size(2)], padding=1, stride=int(1/stride), output_padding=1, groups=groups)
        y.append(yi)
    return torch.cat(y, dim=0)
class AdaptiveConv2d(nn.Module):
def __init__(self, *args, **kwargs):
super().__init__()
def forward(self, input, weight=None, bias=None, stride=1):
return batch_conv(input, weight, bias, stride)
However, this doesn’t distribute properly, and my assumption is that DataParallel isn’t able to handle the for loop in my code.
I haven’t tried implementing a conv layer that takes a batch of weights instead of a single sample. Since I have only switched to pytorch recently and I am a bit out of depth with this. |
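For reference, here is a rough sketch of the kind of batched conv I have in mind, written with the grouped-convolution trick (folding the batch into the channel dimension and passing groups=B); the shapes and names are just illustrative, not from my actual code base:
import torch
import torch.nn.functional as F

B, C_in, C_out, H, W, k = 4, 3, 8, 16, 16, 3
x = torch.randn(B, C_in, H, W)
weights = torch.randn(B, C_out, C_in, k, k)  # one weight tensor per sample

x_grouped = x.reshape(1, B * C_in, H, W)            # (1, B*C_in, H, W)
w_grouped = weights.reshape(B * C_out, C_in, k, k)  # (B*C_out, C_in, k, k)
y = F.conv2d(x_grouped, w_grouped, padding=1, groups=B).reshape(B, C_out, H, W)

# sanity check against the per-sample loop
y_loop = torch.stack([F.conv2d(x[i:i+1], weights[i], padding=1)[0] for i in range(B)])
assert torch.allclose(y, y_loop, atol=1e-4)
Since everything stays in a single batched call, DataParallel should be able to scatter x and weights across devices as usual.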
st179139 | While using dataparallel is it possible to run processes with different number of epochs. Say on machine one I would like to run the process for 20 epochs and sync with master however after 20 epochs I would want to run completely on master. Is there a workaround for this? I used one of the samples given in tutorials however in the event that epochs are varying the master waits to sync up though the process has completed on another machine. |
st179140 | DDP instances need to all participate in the backward, otherwise it would hang. But there are work around. If you know that master would run say 100 epochs, and other nodes would run 80 epochs, you can call forward-backward on the DDP instance for 80 epochs. After that, you can delete the DDP instance, which will remove the DDP grad hooks accordingly. Then, you can run forward-backward on DDP.module (as DDP is deleted, you won’t be able to call DDP.module, but the program can still have a reference to the original local module separately) on master, and it will no longer trigger communications. |
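A minimal, self-contained sketch of that pattern (CPU tensors with the gloo backend purely for illustration; the epoch counts, port, and model below are placeholders):
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train_step(model, opt):
    opt.zero_grad()
    loss = model(torch.randn(8, 10)).sum()
    loss.backward()
    opt.step()

def run(rank, world_size):
    dist.init_process_group("gloo", init_method="tcp://localhost:29501",
                            rank=rank, world_size=world_size)
    local_model = torch.nn.Linear(10, 10)   # keep a separate reference to the plain module
    opt = torch.optim.SGD(local_model.parameters(), lr=0.01)
    ddp_model = DDP(local_model)

    for epoch in range(80):                 # every rank participates in these epochs
        train_step(ddp_model, opt)

    del ddp_model                           # drops the DDP grad hooks; local_model keeps the weights

    if rank == 0:                           # master keeps training alone, no gradient communication
        for epoch in range(80, 100):
            train_step(local_model, opt)

    dist.barrier()
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2, join=True)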
st179141 | I am currently using SubsetRandomSampler to enforce a train-val split on my custom dataset, which works well on my current single-GPU configuration. However, in anticipation of moving to training on multiple nodes and GPUs, I wanted to see if it’s possible to “wrap” the splits created by SubsetRandomSampler somehow such that within my train split, I can replicate the functionality of DistributedSampler.
If not – what alternatives do I have for creating a train-val split? Must I create separate Dataset objects for the train and the val set? |
st179142 | I have many DistributedDataParallel models (NOT DataParallel!) trained with 8 GPUs on a cluster. I have no problem correctly restoring them with the same number of GPUs (8), but the wait time to get 8 GPUs is too long, so I want to restore them with only two.
I was wondering if this is even possible, and if so, what is the correct way to do it?
The script below (test.py) works fine with 8 GPUs but produces erroneous results with 2 GPUs (in the latter case, the results are the same as for a model just initialized with random weights). I use "python -m torch.distributed.launch --nproc_per_node=num_gpus test.py" to run it from the terminal.
import argparse
import torch
from torchvision.models import resnet18
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.distributed as dist
def cleanup():
dist.destroy_process_group()
def main():
torch.distributed.init_process_group(
backend='nccl', init_method='env://')
torch.cuda.set_device(args.local_rank)
model = resnet18()
model = model.to([args.local_rank][0])
model = DDP(model, device_ids=[args.local_rank],
output_device=[args.local_rank][0])
# load the model
checkpoint = torch.load(load_path)
state_dict = checkpoint['model_state_dict']
model.load_state_dict(state_dict)
dist.barrier()
cleanup()
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="blah")
parser.add_argument("--local_rank", type=int)
args, _ = parser.parse_known_args()
main() |
st179143 | This should be possible; there is a map_location argument in torch.load. Check out this.
The map_location can be a device, a function, a map etc. [API 8] |
st179144 | Thank you for your answer. The documentation does not include a working example for DDP. I have already tried many approaches using the map function, none of which have worked so far. If you could show me a simple working example with the MNIST dataset that maps 8 GPUs to 1, 2, or 4 GPUs, or to the CPU, with DistributedDataParallel, I would greatly appreciate it. |
st179145 | There are a few things to clarify.
As you are using the resnet18 from torchvision, the model only lives on a single GPU.
The launcher script you use starts num_gpus processes, and each process has its own DDP instance, dataloader, and the model replica.
With 1 and 2, your training script only needs to put the model on one GPU (you can use the rank as the device id) and load the data onto one GPU; the DDP instance will handle the comm for you and make sure that all model replicas stay synchronized.
With the above 3, the question then becomes "how do I load a model to a specific GPU device?", and the answer is to use map_location=torch.device(rank).
The following code works for me with the launching cmd
python -m torch.distributed.launch --nproc_per_node=2 test.py
import argparse
from torchvision.models import resnet18
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.distributed as dist
import torch
def cleanup():
dist.destroy_process_group()
def main(args):
torch.distributed.init_process_group(backend='nccl', init_method='tcp://localhost:23456', rank=args.local_rank, world_size=2)
torch.cuda.set_device(args.local_rank)
model = resnet18()
path = "save_model.pt"
if args.local_rank == 0:
# save CPU model
torch.save(model, path)
dist.barrier()
# load the model onto this rank's GPU
loaded_model = torch.load(path, map_location=torch.device(args.local_rank))
model = DDP(loaded_model, device_ids=[args.local_rank])
print(f"Rank {args.local_rank} traning on device {list(model.parameters())[0].device}")
# create a dedicated data loader for each process
cleanup()
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="blah")
parser.add_argument("--local_rank", type=int)
args, _ = parser.parse_known_args()
main(args) |
st179146 | @mrshenli thanks for your reply. I tried your method after a few minor corrections, but it still gives me the same erroneous result. I use this resnet script to call the model.
I trained it on a large dataset and decided to save it periodically during training. Due to testing slowing down the training I decided to test it later using the saved models. When I train DDP with 8 gpus and test DDP with 8 gpus later, there is no issue. However, when I train DDP with 8 gpus and test DDP with 2 gpus later the problem occurs.
Also I only want to save and load the state_dict and not the entire model since it takes a lot of space.
I will create a working example for mnist shortly. |
st179147 | I tried your method after a few minor correction but it still gives me the same erroneous result. I use this resnet script to call the model.
You mean you saw error by running the script as is? What error did you see and what fix did you applied?
Also I only want to save and load the state_dict and not the entire model since it takes a lot of space.
It should be doable by just modifying two lines (save and load).
When I train DDP with 8 gpus and test DDP with 8 gpus later, there is no issue. However, when I train DDP with 8 gpus and test DDP with 2 gpus later the problem occurs.
The resnet link you posted points to torchvision resnet, so the model only lives on a single device. How did you go from training on 8 gpus to testing on 2 gpus? Did you do the following?
After training, use only rank 0 to save ddp.module to file.
For testing, as you no longer need comm across models, you don’t need DDP. You can spawn two processes, each loading the saved module from file to its dedicated device by setting map_location, and use something like all_gather to collect loss/accuracy data on rank 0? (See the sketch below.) |
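A rough sketch of the all_gather part (the names are placeholders and it assumes the process group is already initialized; with the NCCL backend the tensor would need to live on the rank's GPU first):
import torch
import torch.distributed as dist

def report_accuracy(correct, total, rank, world_size):
    counts = torch.tensor([correct, total], dtype=torch.float64)
    gathered = [torch.zeros_like(counts) for _ in range(world_size)]
    dist.all_gather(gathered, counts)   # every rank receives every rank's counts
    if rank == 0:
        summed = torch.stack(gathered).sum(dim=0)
        acc = (summed[0] / summed[1]).item()
        print(f"accuracy: {100.0 * acc:.2f}%")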
st179148 | @mrshenli thanks again. I will try to answer all your inquiries in more detail later today.
Unfortunately, I could not use your script as is, because my already-saved DDP checkpoint (without ".module") was saved using the state_dict method.
So, as for the minor changes, I did the following:
def main(args):
torch.distributed.init_process_group(backend='nccl', init_method='env://')
test_loader = DataLoader(
test_dataset,
batch_size=args.test_batch_size,
shuffle=False,
num_workers=args.num_workers,
pin_memory=True)
model = get_model()
#############################################################
# My changes
torch.cuda.set_device(args.local_rank)
model = model.to([args.local_rank][0])
model = DDP(model, device_ids=[args.local_rank],
output_device=[args.local_rank][0])
checkpoint = torch.load(args.load_path) # , map_location=map_location)
state_dict = checkpoint['model_state_dict']
model.load_state_dict(state_dict)
##############################################################
dist.barrier()
test_function(model, test_loader, args.local_rank,args.load_path.with_suffix('.csv'))
I trained resnet18 from scratch. I just copied and used the resnet script locally.
As for your last two comments: I did use just rank 0 to save the DDP, but I saved the state_dict() of the DDP itself (without .module). That is why, when I used your script, I also had to remove the .module, similar to this:
[solved] KeyError: ‘unexpected key “module.encoder.embedding.weight” in state_dict’ 5
Is it correct to do so? |
st179149 | kazem:
As for your last two comments I did use just rank 0 to save the ddp, but I saved the state_dict() for ddp itself (without .module). That is why when I used your script I also had to remove the .module. Is it correct to do so?
Yes, that is correct. The saved and loaded model type need to match. |
st179150 | kazem:
checkpoint = torch.load(args.load_path) # , map_location=map_location)
This line might cause a problem if the model was saved from a device that is not available on the machine that loads the model. But it should be OK in your case, as the model was saved from rank 0 (i.e., “cuda:0”), whose device is available in both envs. However, without map_location, it means the two DDP processes in testing are operating on the same GPU? That could also cause problems. |
st179151 | @mrshenli, sorry for the late reply. Say I want to train the DDP model on 4 GPUs and restore it as DDP on 2. I created an MNIST example to illustrate my case while following your example. The whole script is borrowed from the MNIST example, modified, and split into three scripts:
mnist_common.py
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributed as dist
import argparse
from torchvision import datasets, transforms
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data import DataLoader
def cleanup():
dist.destroy_process_group()
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device, non_blocking=True), \
target.to(device, non_blocking=True)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
def test(args, model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device, non_blocking=True), \
target.to(device, non_blocking=True)
output = model(data)
test_loss += F.nll_loss(
output,
target,
reduction='sum').item()
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
if args.local_rank == 0:
print('Test set: Average loss: {:.4f},'
' Accuracy: {}/{} ({:.2f}%)'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size',
type=int,
default=64,
metavar='N',
help='input batch size for training')
parser.add_argument('--test-batch-size',
type=int,
default=1000,
metavar='N',
help='input batch size for testing')
parser.add_argument('--epochs', type=int, default=14, metavar='N',
help='number of epochs to train (default: 14)')
parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
help='learning rate (default: 1.0)')
parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
help='Learning rate step gamma (default: 0.7)')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--local_rank', type=int)
args = parser.parse_args()
train_dataset = datasets.MNIST(
'../data',
train=True,
download=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
train_sampler = DistributedSampler(
train_dataset,
num_replicas=torch.cuda.device_count(),
rank=args.local_rank)
train_loader = DataLoader(train_dataset,
batch_size=args.batch_size,
shuffle=(train_sampler is None),
num_workers=0,
pin_memory=True,
sampler=train_sampler)
test_loader = DataLoader(
datasets.MNIST(
'../data',
train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.test_batch_size,
shuffle=True,
num_workers=0,
pin_memory=True,)
mnist_train.py
from __future__ import print_function
import torch
import torch.optim as optim
import torch.distributed as dist
import torch.backends.cudnn as cudnn
from torch.optim.lr_scheduler import StepLR
from torch.nn.parallel import DistributedDataParallel as DDP
from mnist_common import args, Net, train_loader, train_sampler,\
test_loader, train, test, cleanup
def main(args):
dist.init_process_group(backend='nccl',
init_method='tcp://localhost:23456',
rank=args.local_rank,
world_size=torch.cuda.device_count())
torch.manual_seed(args.seed)
torch.cuda.set_device(args.local_rank)
cudnn.benchmark = True
model = Net()
model = model.to([args.local_rank][0]) # distribute the model
# Should we set the output_device value in DPP?
model = DDP(model, device_ids=[args.local_rank])
# , output_device=[args.local_rank][0])
optimizer = optim.Adadelta(model.parameters(), lr=args.lr)
scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
for epoch in range(1, args.epochs + 1):
train_sampler.set_epoch(epoch)
train(args, model, args.local_rank,
train_loader, optimizer, epoch)
test(args, model, args.local_rank, test_loader)
scheduler.step(epoch)
# I intend to save the model
# AFTER some training, not before
if args.local_rank == 0:
torch.save(model, "mnist_cnn.pt")
dist.barrier()
cleanup()
if __name__ == '__main__':
main(args)
Also, I intend to test the model only after training (which sometimes takes up to a few days) has finished, by restoring the saved weights (or model).
mnist_test.py
from __future__ import print_function
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from mnist_common import args, Net, test_loader, test, cleanup
def main(args):
dist.init_process_group(backend='nccl',
init_method='tcp://localhost:23456',
rank=args.local_rank,
world_size=2)
torch.manual_seed(args.seed)
torch.cuda.set_device(args.local_rank)
model = torch.load("mnist_cnn.pt",
map_location=torch.device(args.local_rank))
model = DDP(model, device_ids=[args.local_rank])
print(f"Rank {args.local_rank} "
f"test on device {list(model.parameters())[0].device}")
test(args, model, args.local_rank, test_loader)
cleanup()
if __name__ == '__main__':
main(args)
The mnist_train.py script runs successfully using
python -m torch.distributed.launch --nproc_per_node=4 (or 2) mnist_train.py
but when I run the test script using
python -m torch.distributed.launch --nproc_per_node=2 mnist_test.py
I get the following:
Rank 0 test on device cuda:0
Rank 1 test on device cuda:1
Test set: Average loss: 0.0274, Accuracy: 9913/10000 (99.13%)
RuntimeError: Expected tensor for argument #1 input to have
the same device as tensor for argument #2 weight;
but device 0 does not equal 1
(while checking arguments for cudnn_convolution) |
st179152 | Rank 0 test on device cuda:0
Rank 1 test on device cuda:1
Test set: Average loss: 0.0274, Accuracy: 9913/10000 (99.13%)
RuntimeError: Expected tensor for argument #1 input to have
the same device as tensor for argument #2 weight;
but device 0 does not equal 1
(while checking arguments for cudnn_convolution)
This means the first parameter of both models is placed on the correct device. Can you do the same check for all parameters, i.e., make sure that all parameters are placed on the correct device?
output = model(data)
Before the line above in test(...), can you print the device ids of the data as well? Looks like the model and data device does not match on rank 1. |
st179153 | I see. But it should not be the case since both are moved to args.local_rank. Anyways, I did what you suggested and also changed the test-batch-size to 1024. Here’s the outcome:
Rank 0 test on device cuda:0
Rank 1 test on device cuda:1
after data=data.to(device,), before output=model(data) in test function, batch_idx: 0 device: 1
after data=data.to(device,), before output=model(data) in test function, batch_idx: 0 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 1 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 2 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 3 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 4 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 5 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 6 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 7 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 8 device: 0
after data=data.to(device,), before output=model(data) in test function, batch_idx: 9 device: 0
Test set: Average loss: 0.0275, Accuracy: 9913/10000 (99.13%)
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 0 does not equal 1 (while checking arguments for cudnn_convolution) |
st179154 | Rank 1 test on device cuda:1
after data=data.to(device,), before output=model(data) in test function, batch_idx: 0 device: 1
This is weird. This means all model parameters are on cuda:1 and the input batch is also on cuda:1, but somehow one of the conv layers still throws a device mismatch? I am not sure what happened here, but as the error suggests the mismatch occurs in cudnn_convolution, I would check whether the input (x) and the parameters of the two conv layers (self.conv1 and self.conv2) match in the forward() function during testing.
BTW, two more comments on the script:
As you are only doing forward during testing, it is not necessary to use DDP there, as all comm in DDP occurs during backward.
I noticed you saving a DDP module and then load that DDP module and wrap it with another DDP module. Is this intentional? Shouldn’t mnist_train.py save model.module instead? (or use model.module to initialize DDP instances in testing) |
st179155 | Hello,
I am writing an MPI implementation and comparing it to PyTorch distributed;
it seems that there is a mismatch between my implementation and PyTorch's with the gloo backend.
does pytorch distributed send float32 or float64 tensors? |
st179156 | Liron_Mor_Yosef:
does pytorch distributed send float32 or float64 tensors?
It depends on the scalar type of the tensor you pass to the communication API. Check out this. |
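A quick way to see this (gloo backend on CPU with 2 processes; the port number is arbitrary): the dtype you pass in is the dtype that gets reduced, so float32 stays float32 and float64 stays float64.
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def run(rank, world_size):
    dist.init_process_group("gloo", init_method="tcp://localhost:29502",
                            rank=rank, world_size=world_size)
    t32 = torch.ones(4, dtype=torch.float32) * (rank + 1)
    t64 = torch.ones(4, dtype=torch.float64) * (rank + 1)
    dist.all_reduce(t32)
    dist.all_reduce(t64)
    print(rank, t32.dtype, t64.dtype)  # torch.float32 / torch.float64 on every rank
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2, join=True)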
st179157 | I want to train only the last fc layer in my pretrained CNN model with distributed data parallel module.
I tried to set the whole model to eval mode and then switch the fc layer back to train mode:
model.module.eval()
model.module.fc.train()
and I got the following error message:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/app/train_action_model_apex.py", line 466, in main_worker
train_model(args, root_dir)
File "/app/train_action_model_apex.py", line 235, in train_model
trainer.train_epoch(epoch, use_amp=True)
File "/app/trainers/action_model_trainer.py", line 202, in train_epoch
self.optimize_model(loss_dict[self.update_loss_name], use_amp)
File "/app/trainers/action_model_trainer.py", line 68, in optimize_model
scaled_loss.backward()
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/apex/amp/handle.py", line 117, in scale_loss
yield (loss.float())*loss_scale
File "/app/trainers/action_model_trainer.py", line 68, in optimize_model
scaled_loss.backward()
File "/usr/local/lib/python3.5/dist-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: expected scalar type Half but found Float
How can I properly fix the problem? |
st179158 | Solved by pritamdamania87 in post #7
Looks like I see the same issue with 1.1.0 and 1.2.0, although it seems to work 1.3 onwards. Could you try out a version >= 1.3? |
st179159 | It seems you are using some higher-level wrapper with amp?
Could you post a code snippet to reproduce this issue, please? |
st179160 | @ptrblck, thanks for your reply. I’m using amp.
Here is a code snippet to reproduce the issue.
import torch
import torch.nn as nn
from apex import amp
import torch.distributed as dist
class SomeModel(nn.Module):
def __init__(self):
super(SomeModel, self).__init__()
self.conv = nn.Conv3d(
3,
16,
kernel_size=(1, 3, 3),
stride=1,
padding=(0, 1, 1),
bias=False)
self.bn1 = nn.BatchNorm3d(16)
self.relu = nn.ReLU(inplace=True)
self.avgpool = nn.AdaptiveAvgPool3d(1)
self.fc = nn.Linear(16, 3)
def forward(self, x):
x = self.conv(x)
x = self.bn1(x)
x = self.relu(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
print('init process group')
dist.init_process_group(backend='nccl', init_method='tcp://127.0.0.1:7001',
world_size=1, rank=0)
model = SomeModel().cuda()
criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, )
model, optimizer = amp.initialize(model, optimizer, opt_level='O2')
print('ddp')
model = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)
print('model train')
# model.train() # works
model.eval()
model.module.fc.train()
x = torch.randn((5, 3, 7, 7, 7), device='cuda')
y = torch.ones((5, ), device='cuda').long()
print('model forward')
outputs = model(x)
print('calculate loss')
loss = criterion(outputs, y)
print('model backward')
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
print('optimizer step')
optimizer.step()
Also, while making the code snippet, I found that BN causes the issue.
Without BN, no error is raised, though I’m not sure it works properly as intended. |
st179161 | @kkjh0723 I couldn’t get your original code to work since I kept running into this error
RuntimeError: Expected tensor for argument #2 'input' to have the same device as tensor for argument #3 'weight'; but device 1 does not equal 0 (while checking arguments for slow_conv_dilated_all_cuda_template)
I added device_ids=[0] to the DistributedDataParallel constructor and the code seems to work fine now:
import torch
import torch.nn as nn
from apex import amp
import torch.distributed as dist
class SomeModel(nn.Module):
def __init__(self):
super(SomeModel, self).__init__()
self.conv = nn.Conv3d(
3,
16,
kernel_size=(1, 3, 3),
stride=1,
padding=(0, 1, 1),
bias=False)
self.bn1 = nn.BatchNorm3d(16)
self.relu = nn.ReLU(inplace=True)
self.avgpool = nn.AdaptiveAvgPool3d(1)
self.fc = nn.Linear(16, 3)
def forward(self, x):
x = self.conv(x)
x = self.bn1(x)
x = self.relu(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
print('init process group')
dist.init_process_group(backend='nccl', init_method='tcp://127.0.0.1:7001',
world_size=1, rank=0)
model = SomeModel().cuda()
criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, )
model, optimizer = amp.initialize(model, optimizer, opt_level='O2')
print('ddp')
model = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True, device_ids=[0])
print('model train')
# model.train() # works
model.eval()
model.module.fc.train()
x = torch.randn((5, 3, 7, 7, 7), device='cuda')
y = torch.ones((5, ), device='cuda').long()
print('model forward')
outputs = model(x)
print('calculate loss')
loss = criterion(outputs, y)
print('model backward')
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
print('optimizer step')
optimizer.step() |
st179162 | @pritamdamania87, Thanks for answering.
I tried with only 1 visible GPU using CUDA_VISIBLE_DEVICES=0 in my original code.
I also got the same error as you when multiple GPUs are visible.
And I still got the following error when I add device_ids=[0]
RuntimeError: expected scalar type Half but found Float
I wonder if a different version of PyTorch might cause the problem?
I’m currently using 1.1.0. |
st179163 | Looks like I see the same issue with 1.1.0 and 1.2.0, although it seems to work 1.3 onwards. Could you try out a version >= 1.3? |
st179164 | class ToyModule(torch.nn.Module):
def __init__(self) -> None:
super(ToyModule, self).__init__()
self.layer = torch.nn.Linear(2, 2)
self.expected_moved_cuda_tensor = torch.tensor([0, 2, 3])
def forward(self, input: torch.Tensor) -> torch.Tensor:
return self.layer(input)
toy_module = ToyModule()
toy_module.cuda()
When we call .cuda() all the parameters and buffers of the module are moved to the GPU:
next(toy_module.layer.parameters()).device
>>> device(type='cuda', index=0)
But when we inspect the tensor attribute of toy_module, we see device(type='cpu')?
toy_module.expected_moved_cuda_tensor.device
>>> device(type='cpu')
Is this expected or am I missing anything? Thank you. |
st179165 | danh:
When we call .cuda() all the parameters and buffers of the module are moved to the GPU:
self.expected_moved_cuda_tensor is neither a parameter nor a buffer, which is why its device is unchanged. If you want to create a parameter and use it, you can do it as follows:
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear1 = nn.Linear(2, 1)
self.linear1.weight = torch.nn.Parameter(torch.ones(2, 1))
self.linear1.bias = torch.nn.Parameter(torch.zeros(1))
def forward(self, x):
x = self.linear1(x)
return x
You can even use those parameters in the forward method, like this:
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.weight = torch.nn.Parameter(torch.ones(2, 1))
self.bias = torch.nn.Parameter(torch.zeros(1))
def forward(self, x):
# linear regression completely from scratch,
# using parameters created in __init__
x = torch.mm(x, self.weight) + self.bias
return x
And calling .cuda() on the above model does move its parameters to the GPU:
model = Model()
model.cuda()
print(model.weight.device) # prints device(type='cuda', index=0)
print(model.bias.device) # prints device(type='cuda', index=0) |
st179166 | Thanks a lot!
But doesn’t it defeat the intuition of .cuda() if the module’s tensor attribute stays on the same device? |
st179167 | Though .cuda() "should" do as you said, I don’t think changing the device of every torch.tensor attribute of a class inherited from nn.Module by default is a good idea. In your use case it might be helpful, but in some cases the user may not want that, which I think is why it doesn’t do that by default.
One more thing: if you want to create just a constant tensor (not a parameter), you can do that as
self.a_constant_tensor = nn.Parameter(torch.ones(2, 1), requires_grad=False)
and then use it in the forward method.
Or you can use buffers, which is recommended:
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.weight = torch.nn.Parameter(torch.zeros(2, 1))
self.bias = torch.nn.Parameter(torch.zeros(1))
self.register_buffer('a_constant_tensor', torch.tensor([0.5]))
def forward(self, x):
# linear regression completely from scratch,
# using parameters created in __init__
x = torch.mm(x, self.weight) + self.bias + self.a_constant_tensor
return x
model = Model().cuda()
Doing this doesn’t register self.a_constant_tensor as a parameter, so iterating over the model’s parameters won’t return it:
for param in model.parameters():
print(param)
# Only prints about self.weight and self.bias
'''
Parameter containing:
tensor([[0.],
[0.]], device='cuda:0', requires_grad=True)
Parameter containing:
tensor([0.], device='cuda:0', requires_grad=True)
''' |
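As a follow-up check with the same model instance from above: unlike a plain tensor attribute, a registered buffer is moved by .cuda() and is included in state_dict() -
print(model.a_constant_tensor.device)             # cuda:0 -- buffers are moved by .cuda()
print('a_constant_tensor' in model.state_dict())  # True  -- buffers are saved and loaded too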