st178468 | Hi,
I recently started looking into using PyTorch for an active learning project and have some questions about using distributed data parallel. As per my understanding, distributed data parallel creates a process per GPU to run the model. Active learning approaches usually have two stages: annotation and training. In the annotation stage, I need to run the model on the unannotated samples and get annotations for selected samples, and in the training stage, the model needs to be trained on the annotated dataset. I am trying to use DistributedDataParallel but am not sure how to.
My workflow will roughly look like:
Run the model on unannotated samples (across all gpus)
rank samples using the output from previous step based on some criterion (main worker, rank 0)
add new samples to the training dataset (all workers’ data loaders need to be updated) and then run training.
This cycle will repeat till a budget is hit (that budget needs to be synced across all processes in order to be able to terminate them).
I am not clear on how I get the output of the model from all processes (step 1) back into the main process, run things like the sample ranking (step 2) on the main process while the other processes wait for the input from the main process before starting the training cycle again. Also, how do I sync variables like budget across processes? It would be great if I can get a pointer to a resource which helps me understand this more. I have looked at the imagenet training script in the github repo but that didn’t help me understand this process.
Thanks! |
st178469 | Solved by iffiX in post #3
The collective communications are designed for tightly coupled processes, to give you a better idea of how they work, I borrowed some images from mpi:
Each collective communication primitive is a blocking process for all processes in your created processgroup (whether it … |
st178470 | I have been looking into the distributed operations defined here. Are operations like gather and reduce blocking? For example, after running the model on unlabeled data, I can call gather to collect the outputs on a single host. If I do that, do the other processes get blocked as well? The main process then needs to select samples for the labeled set and then broadcast them to all the processes. How do I block the other processes to receive this broadcast before beginning the training cycle? |
st178471 | The collective communications are designed for tightly coupled processes. To give you a better idea of how they work, I borrowed some images from MPI:
Each collective communication primitive is a blocking operation for all processes in your created process group (whether it is the global WORLD group or a sub-group of the global group).
I would suggest you do it in the following way:
import torch.distributed as dist
# make sure all processes in your_group will run this step
model = torch.nn.parallel.DistributedDataParallel(model, ..., output_device=your_device, process_group=your_group)
your_input = ...  # per-process input
# output is only visible to that process
# parameters are synchronized behind the scenes
output = model(your_input)
# make sure all processes in your_group will run this step
dist.gather(..., dst=0)
# perform ranking on process 0
dist.scatter(..., src=0)
# all workers are now synchronized
One crucial thing to note: the receive buffer (tensor) must be equal to or larger than the tensor you send, otherwise nasty errors will be thrown. |
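To answer the earlier question about syncing a variable like the annotation budget across processes, a minimal sketch (not from this thread) using all_reduce could look like the following; the function and argument names are placeholders:

import torch
import torch.distributed as dist

def budget_exhausted(local_spent: float, total_budget: float, device) -> bool:
    """Return True on every rank once the summed spend reaches the budget."""
    spent = torch.tensor([local_spent], dtype=torch.float32, device=device)
    # all_reduce is collective and blocking: every rank participates and
    # receives the same summed value, so all ranks reach the same decision.
    dist.all_reduce(spent, op=dist.ReduceOp.SUM)
    return spent.item() >= total_budget

Because every rank sees the same reduced value, all processes can break out of the annotation/training loop together.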
st178472 | Thank you very much for the images! I was confused with some of the collective operations and that image really helped clear things up. |
st178473 | Hi everyone, I’m dealing with a very bizarre problem that I’m not sure how to solve.
I’m finding that whenever I use DistributedDataParallel where each process creates a Dataloader with num_workers > 0 set, I see that in nvidia-smi that several worker processes are spawned that are each utilizing about 500 MiB.
Whenever I don’t use DistributedDataParallel, the only process I see utilizing GPU memory is the main process (no worker processes are shown). I am extremely confident that there is no code in the dataset get methods or collate functions that moves the tensors to GPU memory.
This issue manifests in two different ways. If I wrap my model in DDP first, I notice these processes claim GPU memory when the dataloader is being created. Vice versa, if I make my dataloader first, I notice these processes claim GPU memory when the model is being wrapped in DDP.
I don’t believe this is expected behavior, I don’t understand why the worker processes would even want GPU memory when they should just be handling fetching the data into RAM. Any guidance on this problem will be appreciated. |
st178474 | Solved by ayalaa2 in post #6
I have narrowed it down! I found that the dataset object is being assigned a class method from a nn.Module class. Thus the solution is to not assign a module’s class method to the dataset. Here is a code snippet that reproduces the problem. NOTE that this problem does not occur if DDP is not used.
I… |
st178475 | Continuing the discussion from Should I use 'spawn' method to start multi-processing?
Could you please show the output of nvidia-smi? Do you see any process id that appears on both GPUs?
A minimal repro would be helpful. |
st178476 | I’m in the process of reproducing the issue in a separate code base. In an initial isolated test, I’m actually not seeing this issue, but I am seeing it in the code base I’m working on… I’m not able to share that code base, but I’m going to continue trying to replicate the problem.
For now, here’s a screenshot of what I’m seeing:
3190 and 3257 both look normal to me. The other processes are workers that are spawned and start to utilize GPU memory somehow. I’m not very familiar with the inner-workings of CUDA, but could it be that the worker processes are running into code that thinks it needs a CUDA context? Thus allocating room for it on each worker process? |
st178477 | ayalaa2:
I’m not very familiar with the inner-workings of CUDA, but could it be that the worker processes are running into code that thinks it needs a CUDA context? Thus allocating room for it on each worker process?
I am not sure, but the size (445MB) does look like CUDA context. cc DataLoader experts @VitalyFedyunin @SimonW @vincentqb |
st178478 | Looks like CUDA context to me. So probably your dataset code somehow uses CUDA. Also, if you are using spawn (the default on Windows and macOS), make sure to wrap all code that may initialize CUDA in if __name__ == '__main__' |
st178479 | I have narrowed it down! I found that the dataset object is being assigned a class method from a nn.Module class. Thus the solution is to not assign a module’s class method to the dataset. Here is a code snippet that reproduces the problem. NOTE that this problem does not occur if DDP is not used.
I would like to have a high level understanding of this issue. Why would assigning a class method to the dataset cause the worker processes to have CUDA context? Additionally, why is this only occurring when using the DDP module?
import torch.multiprocessing as mp
import torch.distributed as dist
import torch.nn as nn
import torchvision
import os
from torch.nn.parallel import DistributedDataParallel as DDP
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
import time

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(10, 10)

    def forward(self, x):
        return x

    def preprocess(self, batch):
        return batch

def main():
    mnist_dataset = MNIST(
        'mnist', train=True, download=True,
        transform=torchvision.transforms.ToTensor()
    )
    model = SimpleModel()
    is_parallel = True
    if is_parallel:
        mp.spawn(train_wrapper, nprocs=2, join=True,
                 args=(model, mnist_dataset))
    else:
        train(model, mnist_dataset, False, 'cuda:1')

def train_wrapper(rank, model, train_data):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '12345'
    devices = ['cuda:1', 'cuda:2']
    dist.init_process_group(backend='nccl', rank=rank, world_size=len(devices))
    train(model, train_data, True, devices[rank])
    dist.destroy_process_group()

def train(model, train_data, is_parallel, device):
    # NOTE: THIS IS THE PROBLEM LINE
    # If you comment this line out, the issue no longer persists
    train_data.preprocess = model.preprocess
    train_loader = DataLoader(
        train_data,
        num_workers=4,
        batch_size=16
    )
    model = model.to(device)
    if is_parallel:
        model = DDP(model, device_ids=[device])
    x = iter(train_loader)
    time.sleep(10)

if __name__ == "__main__":
    main() |
st178480 | I use torch.distributed to train my model.
When I use torch.multiprocessing.set_start_method('spawn'), the GPU memory usage increases with the number of num_workers.
However, when I don’t use torch.multiprocessing.set_start_method('spawn'), the GPU memory usage stays the same for different num_workers.
Therefore, should I use spawn to start multiprocessing?
What is the influence of set_start_method('spawn')?
Why does increasing num_workers increase GPU memory usage in spawn mode? |
st178481 | When using GPUs, I believe spawn should be used, as according to this multiprocessing best practices page, a CUDA context (~500MB) does not fork. This could also be the reason why you see an increasing GPU memory footprint when using more spawned processes, as each process will have its own dedicated CUDA context.
Curious, can you allocate a different GPU to each different process? Or do they have to use the same GPU in your application? |
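As an illustration of the per-process-per-GPU allocation asked about above (a minimal sketch, not code from this thread):

import torch
import torch.multiprocessing as mp

def worker(rank, world_size):
    # Pin this process to its own GPU, so only one CUDA context (~500 MB)
    # is created per device.
    torch.cuda.set_device(rank)
    x = torch.randn(2, 2, device=f'cuda:{rank}')
    print(rank, x.sum().item())

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    # spawn (rather than fork) starts fresh interpreters, so child processes
    # do not inherit the parent's CUDA state.
    mp.spawn(worker, args=(world_size,), nprocs=world_size, join=True)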
st178482 | For distributed parallel training, we do allocate one process per GPU to train the model.
However, because the dataloader also uses multiprocessing, increasing num_workers increases the GPU memory footprint.
In my opinion, the GPU memory footprint added by increasing num_workers does not help training (it brings neither faster training nor better performance). |
st178483 | If those dataloader processes do not use GPU, I guess they can use fork instead?
cc @vincentqb for DataLoader questions. |
st178484 | I want to bump this post, I’m having this exact problem right now. Each additional worker my processes are spawning for data loading is resulting in an increase of about 500 MiB per worker.
Does anyone know how to fix this? |
st178485 | ayalaa2:
Each additional worker my processes are spawning for data loading is resulting in an increase of about 500 MiB per worker.
The 500MB is about the size of a CUDA context. Do those processes use GPUs? |
st178486 | They shouldn’t.
Specifically, I’m trying to run my code with 2 GPUs, thus I spawn two processes. Each initializes its own DataLoader object with num_workers=2. I find that there are 6 processes in total using GPU memory. I definitely expect the two main processes to utilize GPU memory, but I don’t understand why the dataloader worker processes are also utilizing GPU memory.
If I run my code without DDP, I do not see this issue. |
st178487 | Let’s continue discussions in DistributedDataParallel causes Dataloader workers to utilize GPU memory 123 |
st178488 | Hi, I’m trying to implement a customized DataParallel, where I want to manually reduce gradients from replicas on multiple GPUs. The reason I’m doing this is that I want each GPU to accumulate gradients for several iterations before doing one reduce-gradients operation across the GPUs, hence reducing the communication overhead. When using 2 GPUs it seems to work. However, when using 4 GPUs, after some iterations the program simply crashes without any error message. I think some lower-level C code crashes. Do you have any idea why? Here is my code:
from torch.cuda import comm
from torch.nn.parallel import DataParallel
from torch.nn.parallel.replicate import replicate

class DataParallelAccumulation(DataParallel):
    def __init__(self, module, device_ids=None, output_device=None, dim=0):
        super().__init__(module, device_ids=device_ids, output_device=output_device, dim=dim)
        if len(self.device_ids) > 1:
            self.replicas = self.replicate(self.module, self.device_ids, detach=True)

    def forward(self, *inputs, **kwargs):
        if not self.device_ids:
            return self.module(*inputs, **kwargs)
        inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
        if len(self.device_ids) == 1:
            return self.module(*inputs[0], **kwargs[0])
        outputs = self.parallel_apply(self.replicas[:len(inputs)], inputs, kwargs)
        return outputs

    def reduce_grads(self):
        if len(self.device_ids) > 1:
            for parameters in zip(self.module.parameters(), *[r.parameters() for r in self.replicas]):
                destination_device = parameters[0].get_device()
                parameters[0].grad = comm.reduce_add([p.grad for p in parameters[1:]],
                                                     destination=destination_device)

    def synchronize(self):
        if len(self.device_ids) > 1:
            self.replicas = self.replicate(self.module, self.device_ids, detach=True)

    def replicate(self, module, device_ids, detach=False):
        replicas = replicate(module, device_ids, detach=detach)
        return replicas |
st178489 | Solved by Xiaopeng_Li in post #4
Closing this issue because a similar issue is already discussed in pull-19577 and pull-21736, and the PyTorch team has worked out the solution, which is to use the no_sync() context manager. |
st178490 | Hey @Xiaopeng_Li, which version of PyTorch are you using? After https://github.com/pytorch/pytorch/pull/33907, the replicate method is no longer supposed to be used this way. It will only replicate non-leaf parameters, and as a result, replicated_model.parameters() will be empty. Can you double check if this is the case in your dev env?
If you really need this, you can try to access the _former_parameters attribute in replicated models. See the code below. But there is no guarantee on how long this attribute can stay.
github.com
pytorch/pytorch/blob/0edbe6b063d2525ceaf89f0c603f6e35b3118686/torch/nn/parallel/distributed.py#L347-L357
def parameters(m, recurse=True):
    def model_parameters(m):
        ps = m._former_parameters.values() \
            if hasattr(m, "_former_parameters") \
            else m.parameters(recurse=False)
        for p in ps:
            yield p

    for m in m.modules() if recurse else [m]:
        for p in model_parameters(m):
            yield p |
st178491 | @mrshenli Thanks for the reply. I’m using 1.4.0. In this version replicated_model.parameters() does exist, and I understand that in 1.5.0 it no longer exists.
As I mentioned, it works for several iterations of training, but crashes at some point. Every time it crashes at a different iteration, which is pretty weird. Am I doing reduce_grads correctly? |
st178492 | Closing this issue because a similar issue is already discussed in pull-19577 and pull-21736, and the PyTorch team has worked out the solution, which is to use the no_sync() context manager. |
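For reference, a minimal sketch of gradient accumulation with no_sync() (an illustration, not code from this thread; ddp_model, optimizer, loss_fn and data_loader are assumed to exist already):

accumulation_steps = 4
for step, (inputs, targets) in enumerate(data_loader):
    if (step + 1) % accumulation_steps != 0:
        # Skip the gradient all-reduce for intermediate micro-batches.
        with ddp_model.no_sync():
            loss_fn(ddp_model(inputs), targets).backward()
    else:
        # The accumulated gradients are all-reduced in this backward pass.
        loss_fn(ddp_model(inputs), targets).backward()
        optimizer.step()
        optimizer.zero_grad()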
st178493 | I want to use DDP to train a model on GPUs 6 and 7.
The core of the code is:
import datetime
import torch.utils.data.dataloader as dataloader
import sys
import pdb
from termcolor import cprint
import torch
from matplotlib import cm
from tqdm import tqdm
import time
import shutil
import nibabel as nib
import pdb
import argparse
import os
from torch.utils.data.distributed import DistributedSampler

if __name__ == '__main__':
    parser = argparse.ArgumentParser('setup record')
    # default method l
    parser.add_argument("--DDP", default=True)
    # optimizer and scheuler
    parser.add_argument("--lr", default=5e-5)
    parser.add_argument("--opt", default='adam',
                        choices=['adam', 'sgd'])
    parser.add_argument("--num_gpus", default=[6, 7])
    parser.add_argument("--bs", default=4)
    args = parser.parse_args()

    os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(map(str, args.num_gpus))
    if args.DDP:
        print('init ddp')
        os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(map(str, args.num_gpus))
        torch.distributed.init_process_group(backend="nccl")
        local_rank = torch.distributed.get_rank()
        torch.cuda.set_device(local_rank)
        device = torch.device("cuda", local_rank)

    if args.seed is not None:
        numpy.random.seed(args.seed)
        torch.manual_seed(args.seed)
        torch.cuda.manual_seed(args.seed)
    torch.backends.cudnn.benchmark = True

    net = build_model()
    if len(args.num_gpus) > 1 and not args.DDP:
        # pdb.set_trace()
        # net = BalancedDataParallel(args.maingpu_bs, net, dim=0).cuda()
        net = torch.nn.DataParallel(net).cuda()
        print('net to multi-gpu')
    if len(args.num_gpus) > 1 and args.DDP:
        print('using DDP model')
        net = torch.nn.parallel.DistributedDataParallel(net,
                                                        device_ids=[local_rank],
                                                        output_device=local_rank, find_unused_parameters=True)

    dataset = build_datset()
    if args.DDP:
        print('using ddp dataloader')
        train_loader = torch.utils.data.DataLoader(dataset, batch_size=args.bs, shuffle=True,
                                                   num_workers=args.works, pin_memory=True,
                                                   sampler=DistributedSampler(dataset))
    else:
        train_loader = torch.utils.data.DataLoader(dataset, batch_size=args.bs, shuffle=True,
                                                   num_workers=args.works, pin_memory=True)

    """"""
    """
    Training
    """
    print('setting dataloader')
What should I do?
thank you |
st178494 | Hey @xwjBupt
you will need to launch two DDP processes, operating on cuda:6 and cuda:7 respectively.
CUDA_VISIBLE_DEVICES needs to be set before launching the main process.
Something like:
CUDA_VISIBLE_DEVICES=6,7 python main.py
And then, in main.py, you can launch two DDP sub-processes and set device_ids to [0] and [1] respectively. See this example: https://pytorch.org/docs/stable/notes/ddp.html |
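A minimal sketch of that suggestion (an illustration, not from the original reply; build_model mirrors the placeholder in the question above). Launched with CUDA_VISIBLE_DEVICES=6,7 python main.py, local ranks 0 and 1 map to physical GPUs 6 and 7:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(local_rank, world_size):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group('nccl', rank=local_rank, world_size=world_size)
    torch.cuda.set_device(local_rank)  # 0 or 1, i.e. physical GPU 6 or 7
    net = build_model().to(local_rank)
    net = DDP(net, device_ids=[local_rank], output_device=local_rank)
    # ... training loop ...
    dist.destroy_process_group()

if __name__ == '__main__':
    mp.spawn(worker, args=(2,), nprocs=2, join=True)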
st178495 | Hi, first time posting, apologies if I made a mistake in the categorization or anything.
I am trying to have a generator load objects in the background, and I am encountering an extremely strange bug which I have distilled down to the following example. When I try to run the following code, it hangs when trying to call torch.zeros in split_loader_creator, but if I remove the seemingly irrelevant line torch.zeros(152*4, 168*4).float() near the end, it can make progress. It also seems fine if I change 152*4 and 168*4 to much smaller numbers. This is on PyTorch 1.5.1, and I do not encounter the issue on 1.4.0. Am I somehow doing this multiprocessing incorrectly? Would really appreciate any help, and please let me know if I can provide more information that might be helpful.
import torch
import multiprocessing
import atexit

def split_loader_creator():
    for i in range(20):
        yield torch.zeros(10, 170, 70)

def background_generator_helper(gen_creator):
    def _bg_gen(gen_creator, conn):
        gen = gen_creator()
        while conn.recv():
            try:
                conn.send(next(gen))
            except StopIteration:
                conn.send(StopIteration)
                return
            except Exception:
                import traceback
                traceback.print_exc()

    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=_bg_gen, args=(gen_creator, child_conn))
    p.start()
    atexit.register(p.terminate)
    parent_conn.send(True)
    while True:
        parent_conn.send(True)
        x = parent_conn.recv()
        if x is StopIteration:
            return
        else:
            yield x

def background_generator(gen_creator):  # get several processes in the background fetching batches in parallel to keep up with gpu
    generator = background_generator_helper(gen_creator)
    while True:
        batch = next(generator)
        if batch is StopIteration:
            return
        yield batch

torch.zeros(152*4, 168*4).float()

data_loader = background_generator(split_loader_creator)
for i, batch in enumerate(data_loader):
    print(i) |
st178496 | kevinyang:
This is on PyTorch 1.5.1, and I do not encounter the issue on 1.4.0.
This sounds like a regression. And I confirm I can reproduce the reported behavior.
Could you please submit an issue to https://github.com/pytorch/pytorch/issues 4 to report this bug? |
st178497 | Hi,
I have a question about the DistributedDataParallel Module.
In the forward function call of DDP, there is a _sync_params call which broadcasts the model parameters from rank 0 to all the other ranks to keep the model state the same across all processes.
Forward Function in DDP
Sync call
I want to clarify the actual use of this function call.
Think of a case where all processes start and initialize the model with the same set of known weights, so the weights are already uniform across all processes. In such a case, is this sync useful? Since this sync is called in every forward pass, it could slow down training. Please correct me if I am wrong.
Another case is where a user wants to add a mutation to the model weights in each process after a sync call (all-reduce). Think of an instance where a mutation helps to discover diversity in the training, and the all-reduce step ensembles this diversity. In such cases, would having this sync_params call cancel the effect the user expects?
What is the main expectation of the sync call? I referred to the docs, but I couldn’t get a clear picture.
Thank You,
Vibhatha. |
st178498 | Solved by mrshenli in post #2
Hey @Vibhatha_Abeykoon
That _sync_params call is there for two purposes:
Intra rank/process parameter sync: this is only for the legacy single-process multi-device use case, where each process operates on multiple model replicas. And this is not a recommended way to use DDP.
Inter rank/process bu… |
st178499 | Hey @Vibhatha_Abeykoon
That _sync_params call is there for two purposes:
Intra rank/process parameter sync: this is only for the legacy single-process multi-device use case, where each process operates on multiple model replicas. And this is not a recommended way to use DDP.
Inter rank/process buffer sync: this does not sync parameters, and this will be skipped if your model does not have buffers (e.g., running_mean in BatchNorm layers).
What is the main expectation of the sync call?
For many use cases, that _sync_params will be a no-op. |
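As a side note (not part of the original reply): if a model does have buffers and you want to skip this per-iteration buffer broadcast, DDP exposes a constructor flag for it. A minimal sketch, assuming model and local_rank are already set up:

ddp_model = torch.nn.parallel.DistributedDataParallel(
    model,
    device_ids=[local_rank],
    broadcast_buffers=False,  # skip syncing buffers (e.g. BatchNorm running stats) in forward
)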
st178500 | @mrshenli. Thank you for the response.
I see. The first point is clear to me.
About the second point, so this would be the case to support the functionality
of some specific layers like running_mean in BatchNorm layers. But for
general cases, this is also skipped, so there won’t be such synchs.
Is this an assumption that we can make when we use DDP? |
st178501 | Vibhatha_Abeykoon:
About the second point, so this would be the case to support the functionality
of some specific layers like running_mean in BatchNorm layers. But for
general cases, this is also skipped, so there won’t be such synchs.
Is this an assumption that we can make when we use DDP?
Yep, this is correct. |
st178502 | I have the following (minimal) code that runs on GPU and I’m trying to run it in multiple GPUs using nn.DataParallel:
import math
import torch
import pickle
import time
import numpy as np
import torch.optim as optim
from torch import nn

print('device_count()', torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print('get_device_name', torch.cuda.get_device_name(i))

def _data(dimension, num_examples):
    num_mislabeled_examples = 20
    ground_truth_weights = np.random.normal(size=dimension) / math.sqrt(dimension)
    ground_truth_threshold = 0
    features = np.random.normal(size=(num_examples, dimension)).astype(
        np.float32) / math.sqrt(dimension)
    labels = (np.matmul(features, ground_truth_weights) >
              ground_truth_threshold).astype(np.float32)
    mislabeled_indices = np.random.choice(
        num_examples, num_mislabeled_examples, replace=False)
    labels[mislabeled_indices] = 1 - labels[mislabeled_indices]
    return torch.tensor(labels), torch.tensor(features)

class tools:
    def __init__(self):
        self.name = 'x_2'

    def SomeFunc(self, model, input_):
        print(model.first_term(input_)[0])

class predictor(nn.Module):
    def __init__(self, dim):
        super(predictor, self).__init__()
        self.weights = torch.nn.Parameter(torch.zeros(dim, 1, requires_grad=True))
        self.threshold = torch.nn.Parameter(torch.zeros(1, 1, requires_grad=True))

    def first_term(self, features):
        return features @ self.weights

    def forward(self, features):
        return self.first_term(features) - self.threshold

class HingeLoss(nn.Module):
    def __init__(self):
        super(HingeLoss, self).__init__()
        self.relu = nn.ReLU()

    def forward(self, output, target):
        all_ones = torch.ones_like(target)
        labels = 2 * target - all_ones
        losses = all_ones - torch.mul(output.squeeze(1), labels)
        return torch.norm(self.relu(losses))

class function(object):
    def __init__(self, epochs):
        dim = 10
        N = 100
        self.target, self.features = _data(dim, N)
        self.epochs = epochs
        self.model = predictor(dim).to('cuda')
        self.optimizer = optim.SGD(self.model.parameters(), lr=1e-3)
        self.target = self.target.to('cuda')
        self.features = self.features.to('cuda')
        self.loss_function = HingeLoss().to('cuda')
        self.tools = tools()

    def train(self):
        self.model.train()
        for epoch in range(self.epochs):
            self.optimizer.zero_grad()
            output = self.model(self.features)
            # self.tools.SomeFunc(self.model, self.features)
            print(output.is_cuda)
            loss = self.loss_function(output, self.target)
            loss.backward()
            print('For epoch {}, loss is: {}.'.format(epoch, loss.item()))
            self.optimizer.step()

def main():
    model = function(1000)
    print(torch.cuda.device_count())
    if False:  # This is Flag
        if torch.cuda.device_count() > 1:
            model.model = nn.DataParallel(model.model)
    t = time.time()
    model.train()
    print('elapsed: {}'.format(time.time() - t))

if __name__ == '__main__':
    main()
I have 4 GPU cards (device_count = 4). When I set the flag (indicated with the comment This is Flag) to True, it takes 15.78 seconds to run the code. When I set it to False, it takes 0.71 seconds. Why? How could it be fixed?
When I uncomment the line self.tools.SomeFunc(self.model, self.features) and set the flag to True, I receive the following error:
AttributeError: ‘DataParallel’ object has no attribute ‘first_term’
How should I fix this? Thanks! |
st178503 | blade:
I have 4 GPU cards (device_count = 4). When I set the flag (indicated with the comment This is Flag ) to True , it takes 15.78 seconds to run the code. When I set it to False, it takes 0.71 seconds. Why? How could it be fixed?
One thing is that you will need a torch.cuda.synchronize() before calling time.time() to make sure all pending CUDA kernels in the stream are finished.
You can also use elapsed_time to measure. See the discussion here.
If you are looking for the most performant solution, DistributedDataParallel should be the way to go. [example]
When I uncomment the line self.tools.SomeFunc(self.model, self.features) and set the flag to True, I receive the following error:
Looks like self.model is a DataParallel instance? If so, DataParallel does not have the first_term attribute. If this attribute is on the model instance you passed to DataParallel, you can access the original model instance through self.model.module (see the DataParallel code here) which should have the first_term attribute. |
st178504 | Regarding the first part, calling torch.cuda.synchronize() did not help. It still seems that using 4 GPUs instead of 1 makes the code 15 times slower.
After fixing the second part, my output for
print(model.module.first_term(input_)[0])
is always on the first worker:
tensor([0.0020], device='cuda:0', grad_fn=<SelectBackward>)
So it is not even sharing the work between the different workers. |
st178505 | blade:
So it is not even sharing the work between the different workers.
What is “worker” here? Do you mean GPU? If so, isn’t that expected? IIUC, in your code, model is the DataParallel instance. So only the forward function of model would utilize multiple GPUs. See the data parallel implementation below:
github.com
pytorch/pytorch/blob/4104ab8b187d6023b0cc80b77e6944126009c532/torch/nn/parallel/data_parallel.py#L141-L156 1
def forward(self, *inputs, **kwargs):
    if not self.device_ids:
        return self.module(*inputs, **kwargs)

    for t in chain(self.module.parameters(), self.module.buffers()):
        if t.device != self.src_device_obj:
            raise RuntimeError("module must have its parameters and buffers "
                               "on device {} (device_ids[0]) but found one of "
                               "them on device: {}".format(self.src_device_obj, t.device))

    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
    if len(self.device_ids) == 1:
        return self.module(*inputs[0], **kwargs[0])
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
    outputs = self.parallel_apply(replicas, inputs, kwargs)
    return self.gather(outputs, self.output_device) |
st178506 | Regarding the slowdown, I can reproduce it locally with two GPUs:
Using DataParallel on 2 GPUs
For epoch 999, loss is: 8.4603910446167.
elapsed: 3.1627652645111084
Not using DataParallel
For epoch 999, loss is: 8.323615074157715.
elapsed: 1.192000389099121
Then I go back to inspect the model:
def __init__(self, dim):
    super(predictor, self).__init__()
    self.weights = torch.nn.Parameter(torch.zeros(dim, 1, requires_grad=True))
    self.threshold = torch.nn.Parameter(torch.zeros(1, 1, requires_grad=True))
My suspicion is that the model parameters are so light that the overhead of GIL contention, input scattering, output gathering, and model replication in DataParallel forward pass overshadows the speedup brought by multi-gpu training. Are these parameters used in real applications or are you trying to profile DataParallel performance? |
st178507 | mrshenli:
See the data parallel implementation below:
I didn’t quite get your point on this one.
mrshenli:
My suspicion is that the model parameters are so light that the overhead of GIL contention, input scattering, output gathering, and model replication in DataParallel forward pass overshadows the speedup brought by multi-gpu training.
That makes sense. Let me try it on a more sophisticated code and update the answer.
mrshenli:
Are these parameters used in real applications or are you trying to profile DataParallel performance?
I’m just making sure I understand how to use DataParallel correctly. Although way simpler, this sample code mimics the general structure of my actual code. So, there is nothing wrong with my implementation?
mrshenli:
you can access the original model instance through self.model.module (see DataParallel code here ) which should have the first_term attribute.
Thanks, based on your response I used this wrapper to get access to attributes without altering the code itself. |
st178508 | blade:
I didn’t quite get your point on this one.
I might have misunderstood the original question.
After fixing the second part, my output for
print(model.module.first_term(input_)[0])
is always on the first worker:
tensor([0.0020], device='cuda:0', grad_fn=<SelectBackward>)
Is the question about why the output of print(model.module.first_term(input_)[0]) always on cuda:0? |
st178509 | Yes: I’m under the impression that there are 2 ways of parallelizing PyTorch code: DistributedDataParallel and DataParallel. In the former each layer of the network is assigned to a particular processor, while in the latter each processor takes a portion of the training data and all the processors go through all the code (like here). Although DistributedDataParallel is preferred (though I’m not sure why, except for multi-node processing, perhaps?), it looks hairy and I decided to start with DataParallel. Hence, I expected all the processors to call first_term() when they get to that part of the code. What am I missing? |
st178510 | blade:
Although DistributedDataParallel is preferred (though I’m not sure why)
This is mostly due to performance reasons. As of today, DataParallel (DP) replicates the model in its forward pass, while DistributedDataParallel (DDP) replicates the model in its constructor. That means DP replicates the model once in every iteration. Besides, DP also suffers from GIL contention as it is single-process multi-thread. DDP does not hit this problem, as each model replica runs in its own process. More info about DDP can be found here and here.
Hence, I expected all the processors call first_term() when they get to that part of the code. What am I missing?
What happens in DP’s forward function is: 1) replicate model to all devices 2) scatter inputs to all devices 3) launch multiple threads in parallel, where each threads processes an input split using one model replica on one device 4) gather outputs to the same device.
Given the above, if you change the predictor code to the following. You will see it prints multiple devices.
class predictor(nn.Module):
    ....
    def forward(self, features):
        print(self.first_term(features).device)
        return self.first_term(features) - self.threshold
However, for the following code:
def SomeFunc(self, model, input_):
    print(model.first_term(input_)[0])
If it is called outside of a forward pass or if the model argument is not a model replica (the self argument in predictor.forward method), then it won’t show different devices. |
st178511 | blade:
mrshenli:
My suspicion is that the model parameters are so light that the overhead of GIL contention, input scattering, output gathering, and model replication in DataParallel forward pass overshadows the speedup brought by multi-gpu training.
That makes sense. Let me try it on a more sophisticated code and update the answer.
So I tried it on another code base: with 1 GPU my code ran in 434 sec while with 2 GPUs it took 864 sec, so it shouldn’t just be the price we pay for parallelization. Also, using your line print(self.first_term(features).device), it uses all processors at each step, so the code is not being run serially by each GPU. |
st178512 | blade:
So I tried on another code, with 1 GPU my code ran in 434 sec while with 2 GPUs it took 864 sec.
Can we profile how much of the 434s is spent in the forward pass when DP is not present? And how much of that is spent on the GPU? This can be measured using elapsed_time. See this discussion.
Note that multi-threading cannot parallelize normal Python ops due to the Python GIL, and the parallelism only kicks in when the execution does not require the GIL (e.g., CPU/GPU ops that explicitly drop the GIL). |
st178513 | Not sure if I’m doing it right. Is the sample code below the correct way to measure it?
def main():
    model = function(1000)
    print(torch.cuda.device_count())
    if True:
        if torch.cuda.device_count() > 1:
            model.model = MyDataParallel(model.model)
    start = time.monotonic()
    s = torch.cuda.current_stream()
    e_start = torch.cuda.Event(enable_timing=True)
    e_finish = torch.cuda.Event(enable_timing=True)
    s.record_event(e_start)
    model.train()
    torch.cuda.synchronize()
    s.record_event(e_finish)
    e_finish.synchronize()
    end = time.monotonic()
    print('Forward latency is: {} '.format(e_start.elapsed_time(e_finish)))
    print("end - start = ", end - start)

if __name__ == '__main__':
    main() |
st178514 | Yes, it’s just a wrapper so that I can access the attributes (so I don’t need to add .module). If so, below is the output for my actual code:
Forward latency is: 437033.9375
end - start = 437.031393879035
So what do you think is wrong with the parallel implementation that it takes twice as much time to run with two GPUs? |
st178515 | blade:
So what do you think is wrong with the parallel implementation that it takes twice as much time to run with two GPUs?
This is surprising to me. I would assume at least the CUDA ops can run in parallel. Could you please share the full code used in this test? We will investigate. |
st178516 | Hello @mrshenli, I was wondering if you have any comments yet as to why this happens? Thanks. |
st178517 | I was a bit confused about how DDP (with NCCL) reduces gradients and the effect this has on the learning rate that needs to be set.
Would the below example be a correct way to interpret this -> that DDP and DP should have the same learning-rate if scaled out to the same effective batch-size?
Assume set contains 80 samples
Single-gpu LR = 0.1
Total-grad-distance = LR * g * (samples/batch-size)
Single-gpu
batch = 8
gradient = 8g/8 = g
total-grad-distance = 0.1 * g * 10 = g
DP (2-gpu, 1 node)
batch = 16
gradient = 16g/16 = g
total-grad-distance = 0.1 * g * 5 = 0.5g
-> thus scale LR by 2
DDP (2-gpu, 1 node OR 1-gpu, 2 nodes)
batch-per-process = 8
gradient = ((8g/8) + (8g/8)) / 2 = g
total-grad-distance = 0.1 * g * 5 = 0.5g
-> thus scale LR by 2?
Or does allreduce just sum the gradients, in which case:
and ProcessGroup::allreduce() to sum gradients.
DDP (2-gpu, 1 node OR 1-gpu, 2 nodes)
batch-per-process = 8
gradient = (8g/8) + (8g/8) = 2g
total-grad-distance = 0.1 * 2g* 5 = g
-> thus leave LR the same as single-GPU |
st178518 | If you maintain the same batch size between single GPU and DP/DDP, then according to your calculations you do not need to adjust the LR?
PS:
In DDP grads are averaged: https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel
During the backwards pass, gradients from each node are averaged.
PPS: https://arxiv.org/pdf/1706.02677.pdf
Linear Scaling Rule: When the minibatch size is multiplied by k, multiply the learning rate by k. |
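A minimal sketch of that linear scaling rule (an illustration, not code from the thread; model and the 0.1 base LR are placeholders, and the default process group is assumed to be initialized). It applies when the per-process batch size stays fixed, so the effective batch size grows with the number of processes:

import torch
import torch.distributed as dist

base_lr = 0.1  # LR tuned for the single-GPU batch size
world_size = dist.get_world_size()
scaled_lr = base_lr * world_size  # effective batch size is world_size times larger
optimizer = torch.optim.SGD(model.parameters(), lr=scaled_lr)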
st178519 | More discussions can be found at Should we split batch_size according to ngpu_per_node when DistributedDataparallel 418 |
st178520 | I am using DistributedDataParallel to train the model on multiple GPUs. If I would like to stop the process early, how could I achieve it? Thanks. |
st178521 | Is this about uneven inputs on different processes? See:
https://github.com/pytorch/pytorch/issues/33148 79
https://github.com/pytorch/pytorch/issues/38174 47
If all processes know when to exit, simply break the loop would work. The tricky case is when one processes breaks the loop but other processes proceed as mentioned in the above two issues. |
st178522 | mrshenli:
If all processes know when to exit, simply break the loop would work. The tricky case is when one processes breaks the loop but other processes proceed as mentioned in the above two issues.
Indeed this is what I am facing: one process breaks the loop while the others continue. The condition under which the process breaks is that the loss on the eval dataset increases (overfitting). Do you have any ideas? Thanks. |
st178523 | Ideally, we should address this in DDP and close https://github.com/pytorch/pytorch/issues/38174. Before that takes place, you can use all_reduce to synchronize some signal across all processes. See Multiprocessing - Barrier Blocks all Processes?
One thing to note is that, this might have perf impacts, especially when the model is light and its forward pass runs faster than communicating the signal. |
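A minimal sketch of syncing such a stop signal with all_reduce (an illustration, not from the thread; should_stop_locally stands in for e.g. the overfitting check on one rank):

import torch
import torch.distributed as dist

def sync_should_stop(should_stop_locally: bool, device) -> bool:
    flag = torch.tensor([1.0 if should_stop_locally else 0.0], device=device)
    # Every rank participates; if any rank wants to stop, the max is 1 everywhere.
    dist.all_reduce(flag, op=dist.ReduceOp.MAX)
    return bool(flag.item())

All ranks then evaluate the same condition each epoch and break out of the training loop together.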
st178524 | Thanks for your help. Probably I will set a fixed epoch number to address this, which is simple though not optimal. |
st178525 | I’m building a distributed parameter-server type architecture and want to communicate model updates through table solutions on Azure.
I’m having a hard time finding any useful information about saving a model’s state_dict into a Redis cache. I’ve given up on Azure Cosmos tables because of the size limit (64kb) per entity and looked toward Redis, since model state_dict params/weights are much larger, even for a small model.
Does anyone have any recommendations for me on how to pursue this? |
st178526 | Solved by mrshenli in post #7
I see, for this use case, an alternative is to use torch.distributed.rpc to connect the parameter server with trainers, and then let the parameter server periodically flush checkpoints to the external storage. So that you don’t have to pay the checkpointing overhead in every iteration.
Some related… |
st178527 | Is this question about 1) whether Redis is an appropriate storage to save model states or 2) how to configure Azure to run Redis or 3) how to build parameter server using PyTorch? |
st178528 | Azure is simply the platform I’m developing on. I am looking for the answer to 1) Is redis an appropriate storage to save model parameters and weights?
I’ve recently learned about redisAI but it does not have an Azure equivalent service and would have to be deployed on a dedicated VM. |
st178529 | bPangolin:
I am looking for the answer to 1) Is redis an appropriate storage to save model parameters and weights?
Hmm, doesn’t this mainly depend on the data size and IO pattern? Or does being a DNN model make any difference? |
st178530 | I am curious why you are communicating model updates via an external DB. Normally model updates are communicated via collective communication across ranks or something like EASGD (https://arxiv.org/abs/1412.6651). Is your goal debugging, logging, or improved reliability here? It seems like updating the model via an external DB would be a performance hit? |
st178531 | I’m testing out parallelizing across multiple worker nodes in a parameter-server type architecture. I’m using RedisAI to handle the model weight and gradient sharing between primary and worker nodes. At each node the worker retains its own parameter set and delivers gradients to the primary node, which updates the global model. I have four workers, so I combine each worker’s update together.
Sequentially performing worker updates, the global update, and then worker reads of the updated global model is a performance hit. I’m also testing out an update-and-continue scheme where the worker pushes its gradients to the global model and then continues on its own path instead of adjusting to the global model.
beta = 0.25
gmsd = model.state_dict()
for name, param in model.named_parameters():
    worker_001_data = redisai_conn.tensorget(f'worker_001:{name}_grad')
    worker_002_data = redisai_conn.tensorget(f'worker_002:{name}_grad')
    worker_003_data = redisai_conn.tensorget(f'worker_003:{name}_grad')
    worker_004_data = redisai_conn.tensorget(f'worker_004:{name}_grad')
    tens = worker_001_data*beta + worker_002_data*beta + worker_003_data*beta + worker_004_data*beta
    worker_ten = torch.from_numpy(tens).to(self.device)
    if gmsd[name].grad == None:
        gmsd[name].grad = (worker_ten)
    else:
        gmsd[name].grad.copy_(worker_ten)
model.load_state_dict(gmsd)
My goal is improved reliability in the global model’s predictions. |
st178532 | I see, for this use case, an alternative is to use torch.distributed.rpc to connect the parameter server with trainers, and then let the parameter server periodically flush checkpoints to the external storage. So that you don’t have to pay the checkpointing overhead in every iteration.
Some related resources:
Building HogWild! 1 PS using torch.distributed.rpc: https://pytorch.org/tutorials/intermediate/rpc_param_server_tutorial.html 7
Batch updating PS (requires v1.6+): https://github.com/pytorch/tutorials/blob/release/1.6/intermediate_source/rpc_async_execution.rst 4 |
st178533 | The DistributedDataParallel imagenet training example breaks, throwing the following error: RuntimeError: NCCL error in: /tmp/pip-req-build-4baxydiv/torch/lib/c10d/ProcessGroupNCCL.cpp:400, unhandled cuda error when running it on a single node with 10 GPUs. The same runs perfectly fine as soon as the number of GPUs in the environment is set to 8. For DataParallel, it is mentioned somewhere that at present it does not run on more than 8 GPUs; however, I could not find similar info about DDP (I may have missed it). Moreover, as all the processes load their own module locally on each of the devices without a broadcast during initialization, isn’t it unexpected? |
st178534 | Which NCCL version are you using?
Could you rerun your script with NCCL_DEBUG=DEBUG python ... and post the log here? |
st178535 | ptrblck:
NCCL_DEBUG=DEBUG
Thanks for the response.
That flag does not generate much info other than the usual output. I reran the code with NCCL_DEBUG=INFO, below is the log (machine is named gpu123):
$ NCCL_DEBUG=INFO python main.py -a resnet18 --dist-url ‘tcp://127.0.0.1:6840’ --dist-backend ‘nccl’ --multiprocessing-distributed --world-size 1 --local_rank 0 ~/tiny-imagenet-200
Use GPU: 3 for training Use GPU: 5 for training Use GPU: 7 for training Use GPU: 9 for training Use GPU: 6 for training Use GPU: 1 for training Use GPU: 0 for training Use GPU: 4 for training => creating model ‘resnet18’ Use GPU: 8 for training => creating model ‘resnet18’ Use GPU: 2 for training => creating model ‘resnet18’ => creating model ‘resnet18’ => creating model ‘resnet18’ => creating model ‘resnet18’ => creating model ‘resnet18’ => creating model ‘resnet18’ => creating model ‘resnet18’ => creating model ‘resnet18’ gpu123:7932:7932 [0] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7932:7932 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7932:7932 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> NCCL version 2.4.8+cuda10.0 gpu123:7941:7941 [9] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7936:7936 [4] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7934:7934 [2] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7933:7933 [1] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7937:7937 [5] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7935:7935 [3] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7941:7941 [9] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7936:7936 [4] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7934:7934 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7933:7933 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7937:7937 [5] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7935:7935 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7940:7940 [8] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7940:7940 [8] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7941:7941 [9] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7935:7935 [3] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7933:7933 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7936:7936 [4] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7934:7934 [2] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7937:7937 [5] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7940:7940 [8] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7932:8011 [0] NCCL INFO Setting affinity for GPU 0 to 0fff gpu123:7941:8013 [9] NCCL INFO Setting affinity for GPU 9 to 0fff gpu123:7940:8025 [8] NCCL INFO Setting affinity for GPU 8 to 0fff gpu123:7934:8022 [2] NCCL INFO Setting affinity for GPU 2 to 0fff gpu123:7937:8023 [5] NCCL INFO Setting affinity for GPU 5 to 0fff gpu123:7935:8017 [3] NCCL INFO Setting affinity for GPU 3 to 0fff gpu123:7936:8020 [4] NCCL INFO Setting affinity for GPU 4 to 0fff gpu123:7933:8018 [1] NCCL INFO Setting affinity for GPU 1 to 0fff gpu123:7939:7939 [7] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7939:7939 [7] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). gpu123:7939:7939 [7] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7938:7938 [6] NCCL INFO Bootstrap : Using [0]ib0:10.36.192.223<0> gpu123:7938:7938 [6] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 
gpu123:7938:7938 [6] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB ; OOB ib0:10.36.192.223<0> gpu123:7939:8027 [7] NCCL INFO Setting affinity for GPU 7 to 0fff gpu123:7938:8029 [6] NCCL INFO Setting affinity for GPU 6 to 0fff gpu123:7932:8011 [0] NCCL INFO Channel 00 : 0 1 2 3 4 5 6 7 8 9 gpu123:7935:8017 [3] NCCL INFO Ring 00 : 3[3] -> 4[4] via P2P/IPC gpu123:7938:8029 [6] NCCL INFO Ring 00 : 6[6] -> 7[7] via P2P/IPC gpu123:7936:8020 [4] NCCL INFO Ring 00 : 4[4] -> 5[5] via P2P/IPC gpu123:7937:8023 [5] NCCL INFO Ring 00 : 5[5] -> 6[6] via P2P/IPC gpu123:7939:8027 [7] NCCL INFO Ring 00 : 7[7] -> 8[8] via P2P/IPC gpu123:7941:8013 [9] NCCL INFO Ring 00 : 9[9] -> 0[0] via P2P/IPC gpu123:7940:8025 [8] NCCL INFO Ring 00 : 8[8] -> 9[9] via P2P/IPC gpu123:7933:8018 [1] NCCL INFO Ring 00 : 1[1] -> 2[2] via P2P/IPC gpu123:7932:8011 [0] NCCL INFO Ring 00 : 0[0] -> 1[1] via P2P/IPC gpu123:7934:8022 [2] NCCL INFO Ring 00 : 2[2] -> 3[3] via P2P/IPC gpu123:7941:8013 [9] transport/p2p.cc:574 NCCL WARN failed to open CUDA IPC handle : 60 peer mapping resources exhausted gpu123:7941:8013 [9] NCCL INFO init.cc:669 -> 1 gpu123:7941:8013 [9] NCCL INFO init.cc:815 -> 1 gpu123:7941:8013 [9] NCCL INFO init.cc:951 -> 1 gpu123:7941:8013 [9] NCCL INFO misc/group.cc:69 -> 1 [Async thread] gpu123:7932:8011 [0] transport/p2p.cc:604 NCCL WARN failed to open CUDA IPC handle : 60 peer mapping resources exhausted gpu123:7932:8011 [0] NCCL INFO init.cc:679 -> 1 gpu123:7932:8011 [0] NCCL INFO init.cc:815 -> 1 gpu123:7932:8011 [0] NCCL INFO init.cc:951 -> 1 gpu123:7932:8011 [0] NCCL INFO misc/group.cc:69 -> 1 [Async thread] gpu123:7935:8017 [3] NCCL INFO comm 0x2abf58001e10 rank 3 nranks 10 cudaDev 3 nvmlDev 3 - Init COMPLETE gpu123:7934:8022 [2] NCCL INFO comm 0x2b8408001e10 rank 2 nranks 10 cudaDev 2 nvmlDev 2 - Init COMPLETE gpu123:7937:8023 [5] NCCL INFO comm 0x2b011c001e10 rank 5 nranks 10 cudaDev 5 nvmlDev 5 - Init COMPLETE gpu123:7936:8020 [4] NCCL INFO comm 0x2b4b28001e10 rank 4 nranks 10 cudaDev 4 nvmlDev 4 - Init COMPLETE gpu123:7938:8029 [6] NCCL INFO comm 0x2b58d8001e10 rank 6 nranks 10 cudaDev 6 nvmlDev 6 - Init COMPLETE gpu123:7933:8018 [1] NCCL INFO comm 0x2af070001e10 rank 1 nranks 10 cudaDev 1 nvmlDev 1 - Init COMPLETE gpu123:7939:8027 [7] NCCL INFO comm 0x2b2180001e10 rank 7 nranks 10 cudaDev 7 nvmlDev 7 - Init COMPLETE gpu123:7940:8025 [8] NCCL INFO comm 0x2b015c001e10 rank 8 nranks 10 cudaDev 8 nvmlDev 8 - Init COMPLETE /nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown len(cache)) /nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown len(cache)) /nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown len(cache)) /nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown len(cache)) /nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown len(cache)) 
/nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown len(cache)) Traceback (most recent call last): File “main.py”, line 425, in main() File “main.py”, line 109, in main mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args)) File “/nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py”, line 171, in spawn while not spawn_context.join(): File “/nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py”, line 118, in join raise Exception(msg) Exception: – Process 9 terminated with the following error: Traceback (most recent call last): File “/nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py”, line 19, in _wrap fn(i, *args) File “/nfs/scistore08/alistgrp/bchatter/workspace/async-opt/dist_data_parallel/imagenet_training/main.py”, line 151, in main_worker model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) File “/nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/distributed.py”, line 298, in init self.broadcast_bucket_size) File “/nfs/scistore08/alistgrp/bchatter/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/distributed.py”, line 480, in _distributed_broadcast_coalesced dist._broadcast_coalesced(self.process_group, tensors, buffer_size) RuntimeError: NCCL error in: /tmp/pip-req-build-4baxydiv/torch/lib/c10d/ProcessGroupNCCL.cpp:400, unhandled cuda error |
st178536 | Hey @bapi
For DataParallel, somewhere it is mentioned that at present it does not run on more than 8 GPUs;
Just curious, could you please point me to the doc with this claim? This is new to me, I wasn’t aware there is such a limitation in DP.
however, I could not find similar info about DDP (I may have missed it).
We recently tested DDP using 256 GPUs, and it runs fine. Could this error be something specific to the imagenet example? cc @fmassa for vision questions
Moreover, as all the processes load their own module locally on each of the devices without a broadcast during initialization, is not it unexpected?
There is a broadcast in DDP ctor. Please see the link below:
github.com
pytorch/pytorch/blob/c71ec1c717e5b225f28ef3bacde416c69f3c4d77/torch/nn/parallel/distributed.py#L324-L329 2
# Sync params and buffers
module_states = list(self.module.state_dict().values())
if len(module_states) > 0:
    self._distributed_broadcast_coalesced(
        module_states,
        self.broadcast_bucket_size) |
st178537 | Hi @mrshenli, (sorry for such a late read/response to this message)
So, I figured out that using either the flag NCCL_P2P_LEVEL=0 or NCCL_P2P_DISABLE=1, DDP runs fine on a machine with >8 GPUs. Here I am specifically talking about 10 GPUs in the same machine. I am not sure about the topology of the 256 GPUs that you mentioned.
Right now, I can not locate the doc page where I had seen that nn.dataparallel does not run (efficiently or something?) on more than 8 GPUs in a machine. Maybe it was in the previous version of the doc? In any case, I will confirm this one by testing it on a machine with 10 GPUs that I have access to.
Thanks. |
st178538 | bapi:
I am not sure about the topology of the 256 GPUs that you mentioned.
In that test, each node only has 8 GPUs.
So, I figured out that using either the flag NCCL_P2P_LEVEL=0 or NCCL_P2P_DISABLE=1, DDP runs fine on a machine with >8 GPUs.
I see. We don’t have tests covering > 8GPUs per node cases yet. This is an important message, thanks for sharing! |
st178539 | Hi there. Recently I started using multiple CPU cores for training. On my own PC, a MacBook 2017 (1 CPU, 4 cores), I just set os.environ[‘MASTER_PORT’] to one single value and multiple processes could run on the same server. However, when I migrated the code to the cluster in order to use more cores, I needed to give a different value to os.environ[‘MASTER_PORT’] for each process. If not, permission is denied as below.
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: Permission denied
I don’t know much about the reason here, could someone explain it? |
st178540 | Hey @Meraki, the MASTER_PORT needs to be set to the same value for all processes; otherwise they cannot conduct the rendezvous correctly. This error might be caused by other configurations. It might be helpful to print the value of the following environment variables on all processes right before init_process_group is called: “MASTER_ADDR”, “MASTER_PORT”, “RANK”, “WORLD_SIZE” |
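A minimal sketch of that debugging step (an illustration, not from the original reply; the gloo backend is an assumption for CPU-only training):

import os
import torch.distributed as dist

# Print the rendezvous-related environment variables on every process.
for key in ("MASTER_ADDR", "MASTER_PORT", "RANK", "WORLD_SIZE"):
    print(f"{key}={os.environ.get(key)}")

dist.init_process_group(backend="gloo", init_method="env://")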
st178541 | Yep, you are right. I just figured out that the reason is that on Linux you need root permissions to open a port below 1024. That’s why the permission was denied.
Thanks for your help, @mrshenli |
st178542 | Hi,
I am running a simple application on two machines with 2 GPUs each, and it is throwing an error. The application works fine on a single machine with 2 GPUs.
The NCCL error info is below:
dml4:26072:26072 [1] NCCL INFO Bootstrap : Using [0]XXXXXX<0> [1]enp0s20f0u1u6:169.254.95.120<0> [2]virbr0:192.168.122.1<0>
dml4:26071:26071 [0] NCCL INFO Bootstrap : Using [0]XXXXX<0> [1]enp0s20f0u1u6:169.254.95.120<0> [2]virbr0:XXXXX<0>
dml4:26072:26072 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
dml4:26071:26071 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
dml4:26072:26072 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB enp88s0:9.1.44.100<0>
dml4:26071:26071 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB enp88s0:9.1.44.100<0>
dml4:26072:26240 [1] NCCL INFO Setting affinity for GPU 1 to ffff,f00000ff,fff00000
dml4:26071:26242 [0] NCCL INFO Setting affinity for GPU 0 to 0fffff00,000fffff
dml4:26072:26240 [1] NCCL INFO CUDA Dev 1[1], IB NIC distance : SYS
dml4:26071:26242 [0] NCCL INFO CUDA Dev 0[0], IB NIC distance : NODE
dml4:26071:26242 [0] NCCL INFO Ring 00 : 1 -> 2 [receive] via NET/IB/0
dml4:26071:26242 [0] NCCL INFO Ring 00 : 2[0] -> 3[1] via direct shared memory
dml4:26072:26240 [1] NCCL INFO Ring 00 : 3 -> 0 [send] via NET/IB/0
dml4:26072:26240 [1] misc/ibvwrap.cc:252 NCCL WARN Call to ibv_reg_mr failed
dml4:26072:26240 [1] NCCL INFO transport/net_ib.cc:601 -> 2
dml4:26072:26240 [1] NCCL INFO include/net.h:24 -> 2
dml4:26072:26240 [1] NCCL INFO transport/net.cc:360 -> 2
dml4:26072:26240 [1] NCCL INFO init.cc:669 -> 2
dml4:26072:26240 [1] NCCL INFO init.cc:815 -> 2
dml4:26072:26240 [1] NCCL INFO init.cc:951 -> 2
dml4:26072:26240 [1] NCCL INFO misc/group.cc:69 -> 2 [Async thread]
dml4:26071:26242 [0] misc/ibvwrap.cc:252 NCCL WARN Call to ibv_reg_mr failed
dml4:26071:26242 [0] NCCL INFO transport/net_ib.cc:601 -> 2
dml4:26071:26242 [0] NCCL INFO include/net.h:24 -> 2
dml4:26071:26242 [0] NCCL INFO transport/net.cc:388 -> 2
dml4:26071:26242 [0] NCCL INFO init.cc:679 -> 2
dml4:26071:26242 [0] NCCL INFO init.cc:815 -> 2
dml4:26071:26242 [0] NCCL INFO init.cc:951 -> 2
dml4:26071:26242 [0] NCCL INFO misc/group.cc:69 -> 2 [Async thread]
Traceback (most recent call last):
File “conv_dist.py”, line 118, in
main()
File “conv_dist.py”, line 51, in main
mp.spawn(train, nprocs=args.gpus, args=(args,), join=True)
File “/work/tools/envs/dine2/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method=‘spawn’)
File “work/tools/envs/dine2/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 158, in start_processes
while not context.join():
File “/work/tools/envs/dine2/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 119, in join
raise Exception(msg)
Exception:
– Process 0 terminated with the following error:
Traceback (most recent call last):
File “/work/tools/envs/dine2/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 20, in _wrap
fn(i, *args)
File “/us4j4248/pt_dist/conv_dist.py”, line 75, in train
model = DDP(model, device_ids=[gpu])
File “/work/tools/envs/dine2/lib/python3.6/site-packages/torch/nn/parallel/distributed.py”, line 285, in init
self.broadcast_bucket_size)
File “/work/tools/envs/dine2/lib/python3.6/site-packages/torch/nn/parallel/distributed.py”, line 496, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1591914838379/work/torch/lib/c10d/ProcessGroupNCCL.cpp:514, unhandled system error, NCCL version 2.4.8
PS: I have removed the IP addresses above.
Thanks |
st178543 | Solved by lcw in post #8
An error in ibv (i.e., InfiniBand verbs) indicates problems with GPU Direct, which NCCL tries to use for RDMA but which Gloo doesn’t. You can try to confirm that this is indeed the issue by running with the NCCL_IB_DISABLE=1 env var. That may work but would probably end up being slower. In that case… |
st178544 | Hey Shen,
I am running a simple application using <torch.distributed.launch> on two machines each having 2 gpus. It throws me the above error.
CUDA 10.2
Pytorch 1.5.1
NCCL backend as
libnccl-devel-2.5.6-1+cuda10.2.x86_64
libnccl-2.5.6-1+cuda10.2.x86_64
libnccl-static-2.5.6-1+cuda10.2.x86_64
But torch.cuda.nccl.version() shows me 2.4.
Ran NCCL tests – they are working fine.
def train(args):
    current_env = os.environ.copy()
    dist.init_process_group(backend='nccl', init_method='env://')
    model = ConvNet()
    torch.cuda.set_device(args.local_rank)
    model.cuda(args.local_rank)
    batch_size = 256
    # define loss function (criterion) and optimizer
    criterion = nn.CrossEntropyLoss().cuda(args.local_rank)
    optimizer = torch.optim.SGD(model.parameters(), 1e-4)
    model = DDP(model, device_ids=[args.local_rank])
    # Data loading code
    train_dataset = torchvision.datasets.MNIST(root='./data',
                                               train=True,
                                               transform=transforms.ToTensor(),
                                               download=True)
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        train_dataset, num_replicas=int(current_env["WORLD_SIZE"]), rank=args.local_rank)
    train_loader = torch.utils.data.DataLoader(
        dataset=train_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=0,
        pin_memory=True,
        sampler=train_sampler)
Please let me know if you need any more info.
Thanks |
st178545 | Hey @nash, thanks for sharing the code. The code looks correct to me, except the rank argument in DistributedSampler might need to be global rank (i.e., current_env["RANK"]) instead of the local rank? But this is not the cause of the error, as the error was thrown in DDP ctor when it tries to run broadcast.
Regarding the error:
1. Curious, does the gloo backend work?
2. Could you please print out the env vars set by the launching script? Something like what this example 4 does with env_dict.
3. Could you try running the following command on both nodes? If the returned IP is not what you intended, you might need to set either GLOO_SOCKET_IFNAME or NCCL_SOCKET_IFNAME as mentioned here 9, depending on which backend you are using:
getent hosts `hostname` |
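If the returned IP belongs to the wrong interface, a sketch of pinning the interface from Python (the interface name below is only an example; use the one that carries the intended IP on your nodes):
import os
import torch.distributed as dist

# Tell NCCL (and Gloo) which network interface to use for rendezvous and traffic.
os.environ["NCCL_SOCKET_IFNAME"] = "enp88s0"   # example interface name, not a recommendation
os.environ["GLOO_SOCKET_IFNAME"] = "enp88s0"
dist.init_process_group(backend="nccl", init_method="env://")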
st178546 | Hey @mrshenli thanks for your response.
1. The Gloo backend works perfectly fine with this code; the NCCL backend throws the error.
2. Here are the outputs of the environment variables with the Gloo backend.
Node 0 output
Initializing process group with: {'MASTER_ADDR': 'a.b.c.d', 'MASTER_PORT': '12354', 'RANK': '1', 'WORLD_SIZE': '4'}
[3196] Initializing process group with: {'MASTER_ADDR': 'a.b.c.d', 'MASTER_PORT': '12354', 'RANK': '0', 'WORLD_SIZE': '4'}
[3196] world_size = 4, rank = 0, backend=gloo
[3197] world_size = 4, rank = 1, backend=gloo
Node 1 output
Initializing process group with: {'MASTER_ADDR': 'a.b.c.d', 'MASTER_PORT': '12354', 'RANK': '2', 'WORLD_SIZE': '4'}
[89966] Initializing process group with: {'MASTER_ADDR': 'a.b.c.d', 'MASTER_PORT': '12354', 'RANK': '3', 'WORLD_SIZE': '4'}
[89966] world_size = 4, rank = 3, backend=gloo
[89965] world_size = 4, rank = 2, backend=gloo
3. The command returns the domain name address of each host (a.b.c.d), etc. I am not setting SOCKET_IFNAME anywhere.
My system has NCCL 2.5 while torch (torch.cuda.nccl.version()) shows 2.4.8.
Could this be the problem?
How can I upgrade the NCCL version in torch?
Thanks. |
st178547 | If Gloo works fine then it means all the env vars and configs should be correct.
How can I upgrade the NCCL version in torch?
That will require modifying the pytorch NCCL submodule and recompiling, like this: https://github.com/pytorch/pytorch/pull/40622 5. You can pull this PR and compile from it, which should be using NCCL 2.7.3.
Another option is to set export USE_SYSTEM_NCCL=1 and then compile from source; it should then use the 2.5 that you installed on the machine. |
st178548 | Thanks @mrshenli.
As you mentioned, pytorch has NCCL precompiled and both nodes use the same version of NCCL.
Does that mean the NCCL version is not the problem?
Did you notice this "misc/ibvwrap.cc:252 NCCL WARN Call to ibv_reg_mr failed" in the logs?
I tried to build torch from source and hit another roadblock there as well:
“Performing Test SUPPORT_GLIBCXX_USE_C99 - Failed”
thanks. |
st178549 | An error in ibv (i.e., InfiniBand verbs) indicates problems with GPU Direct, which NCCL tries to use for RDMA but which Gloo doesn’t. You can try to confirm that this is indeed the issue by running with the NCCL_IB_DISABLE=1 env var. That may work but would probably end up being slower. In that case you might want to follow the instructions here to troubleshoot InfiniBand issues: https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#gpu-direct 58 |
st178550 | Hi @lcw, thanks. Indeed it is an NCCL issue, and setting NCCL_IB_DISABLE=1 works fine. |
st178551 | Hey folks,
I have a server with large amounts of RAM but slow storage, and I want to speed up training by keeping my dataset in RAM. I also use DDP, which means there are multiple processes per node (one per GPU). On top of that, I use multiple num_workers in my dataloader, so having a simple Python list as a cache would mean multiple caches, which eats up a lot of memory.
The natural solution is to use shared memory, and this is how I use it.
In the launch process, do
if __name__ == '__main__':
    import argparse
    import os
    import ctypes
    import numpy as np   # needed for np.ctypeslib below
    import torch.multiprocessing as mp
    shared_base = mp.Array(ctypes.c_byte, 80000*3*256*256, lock=True)
    with shared_base.get_lock():
        shared_array = np.ctypeslib.as_array(shared_base.get_obj())
        img_cache = shared_array.reshape(80000, 256, 256, 3)
    use_cache = mp.Array(ctypes.c_float, 1, lock=False)
    use_cache[0] = -3.14
This cache is sent to each process as
mp.spawn(main, nprocs=ngpus_per_node, args=(args, img_cache, use_cache))
Each process takes this shared memory and gives it to a dataset object:
dset = SVAE_FFHQ(args.data_folder, transform, 32, 64, args.hidden_size, img_cache, use_cache)
The SVAE_FFHQ class looks like this:
class SVAE_FFHQ(data.Dataset):
def __init__(self, root_dir, transform=None, top_size=32, bottom_size=64, dim=256, img_cache=None, use_cache=None):
super().__init__()
...
self.img_cache = img_cache
self.use_cache = use_cache
def _use_cache(self):
self.use_cache[0] = 3.14
print('Using cache')
def __getitem__(self, idx):
path, lbl = self.dset.samples[idx]
if self.use_cache[0] < 0:
with open(path, 'rb') as f:
img = Image.open(f)
img = img.convert('RGB')
img = img.resize((256, 256), Image.LANCZOS)
self.img_cache[idx] = deepcopy(np.asarray(img))
del img
return self.transform(Image.fromarray(self.img_cache[idx], 'RGB'))
This seems fine to me, but what happens is:
1. The shared memory appears to be pickled and copied rather than shared across the multiple spawned processes, which means my memory requirements increase with the number of processes spawned.
2. This isn't any faster than reading data off of the slow HDDs.
Any insight into these problems?
Thanks! |
st178552 | I cannot replicate your first problem with the code snippet below. Memory is not pickled, as you can see in the screenshot; only the main process holds the 1 GB shared memory array:
import ctypes
import time
import numpy as np
import multiprocessing as mp
def subproc(array):
with array.get_lock():
np_array = np.ctypeslib.as_array(array.get_obj())
print(np_array[1000])
# keep process showing in "top"
begin = time.time()
while time.time() - begin < 10:
a = 1 * 100
if __name__ == "__main__":
array = mp.Array(ctypes.c_byte, 1000*1024*1024, lock=True)
with array.get_lock():
np_array = np.ctypeslib.as_array(array.get_obj())
np_array.fill(100)
print("allocated")
p = mp.Process(target=subproc, args=(array,))
p2 = mp.Process(target=subproc, args=(array,))
p.start()
p2.start()
print("started")
# keep process showing in "top"
begin = time.time()
while time.time() - begin < 10:
a = 1 * 100
p.join()
p2.join()
print("joined")
I think the second question might be related to Image.fromarray, see this issue 7 |
st178553 | Thanks for implementing this by yourself.
I must ask a couple of questions though:
1. What was the version of Python you used? I know that some of how Python pickles data changed since version 3.8.
2. I use mp.spawn to start the processes, where mp is imported from torch.multiprocessing. Could that be a problem?
Anyways this looks interesting. Thanks again. |
st178554 | I am using the default python3.5.2 installation from Ubuntu 16.04.
I have added mp.set_start_method("spawn") and still cannot reproduce your issue; it would be better if you could share a minimal problematic code snippet. |
st178555 | Thanks again.
Let me also try your code on my machine and create a minimal code implementation that I can share here. |
st178556 | This is the code I run
import ctypes
import time
import numpy as np
import torch.multiprocessing as mp
def subproc2(gpu, array):
with array.get_lock():
np_array = np.ctypeslib.as_array(array.get_obj())
print(np_array[1000])
if gpu == 0:
np_array[999] = 0
elif gpu == 1:
np_array[1000] = 1
# keep process showing in "top"
begin = time.time()
while time.time() - begin < 10:
a = 1 * 100
return 0
if __name__ == "__main__":
mp.set_start_method('spawn')
array = mp.Array(ctypes.c_byte, 1000*1024*1024, lock=True)
with array.get_lock():
np_array = np.ctypeslib.as_array(array.get_obj())
np_array.fill(100)
print("allocated")
mp.spawn(subproc2, args=(array,), nprocs=2)
# keep process showing in "top"
print(np_array[999:10001])
begin = time.time()
while time.time() - begin < 100:
a = 1 * 100
print('done')
The observations are:
1. Memory is only ever allocated in the main process.
2. If I don't set the start method to spawn on Linux, I get a SIGSEGV even on access in subproc2.
What I want to test next is whether passing this memory to the dataloaders as-is (as an np.array) will increase memory consumption.
Let me do that now. |
st178557 | import ctypes
import time
import numpy as np
import torch.multiprocessing as mp
import torch
from torch.utils.data import Dataset
class SharedDataset1(Dataset):
def __init__(self, shared_mem):
super(SharedDataset1, self).__init__()
self.shared_mem = shared_mem
def __len__(self):
return 10000
def __getitem__(self, idx):
return torch.randn(3, 32, 32)
def np_before(gpu, array, num_workers):
with array.get_lock():
np_array = np.ctypeslib.as_array(array.get_obj())
dataset = SharedDataset1(np_array)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=10, num_workers=num_workers)
for img in dataloader:
c = img+0.1
time.sleep(3)
if __name__ == "__main__":
mp.set_start_method('spawn')
array = mp.Array(ctypes.c_byte, 10000*1024*1024, lock=True)
with array.get_lock():
np_array = np.ctypeslib.as_array(array.get_obj())
np_array.fill(100)
print("allocated")
mp.spawn(np_before, args=(array, 1), nprocs=2)
print("started")
begin = time.time()
while time.time() - begin < 100:
a = 1 * 100
This gives the following error
Traceback (most recent call last):
File “mem_data.py”, line 64, in
mp.spawn(np_before, args=(array, 1), nprocs=2)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/site-packages/torch/multiprocessing/spawn.py”, line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method=‘spawn’)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/site-packages/torch/multiprocessing/spawn.py”, line 158, in start_processes
while not context.join():
File “/home/parawr/.conda/envs/faclab/lib/python3.7/site-packages/torch/multiprocessing/spawn.py”, line 119, in join
raise Exception(msg) |
st178558 | Traceback (most recent call last):
File “/home/parawr/.conda/envs/faclab/lib/python3.7/site-packages/torch/multiprocessing/spawn.py”, line 20, in _wrap
fn(i, *args)
File “/home/parawr/Projects/shared/mem_data.py”, line 40, in np_before
for img in dataloader:
File “/home/parawr/.conda/envs/faclab/lib/python3.7/site-packages/torch/utils/data/dataloader.py”, line 279, in iter
return _MultiProcessingDataLoaderIter(self)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/site-packages/torch/utils/data/dataloader.py”, line 719, in init
w.start()
File “/home/parawr/.conda/envs/faclab/lib/python3.7/multiprocessing/process.py”, line 112, in start
self._popen = self._Popen(self)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/multiprocessing/context.py”, line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/multiprocessing/context.py”, line 284, in _Popen
return Popen(process_obj)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/multiprocessing/popen_spawn_posix.py”, line 32, in init
super().init(process_obj)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/multiprocessing/popen_fork.py”, line 20, in init
self._launch(process_obj)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/multiprocessing/popen_spawn_posix.py”, line 47, in _launch
reduction.dump(process_obj, fp)
File “/home/parawr/.conda/envs/faclab/lib/python3.7/multiprocessing/reduction.py”, line 60, in dump
ForkingPickler(file, protocol).dump(obj)
OverflowError: cannot serialize a bytes object larger than 4 GiB |
st178559 | This thing works
def np_after(gpu, array, num_workers):
dataset = SharedDataset1(array)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=10, num_workers=num_workers)
for ii, img in enumerate(dataloader):
print(ii)
c = img+0.1
time.sleep(1)
return 0
class SharedDataset2(Dataset):
def __init__(self, shared_mem):
super(SharedDataset2, self).__init__()
with shared_mem.get_lock():
self.shared_mem = np.ctypeslib.as_array(shared_mem.get_obj())
def __len__(self):
return 10000
def __getitem__(self, idx):
return torch.randn(3, 32, 32)
I think it might have to do with what Python pickles: the array/shared_mem is a SynchronizedArray object, which Python knows not to pickle but to share, while in the np_before function it is an np.ndarray, which Python tries to pickle.
Let me now try and fix the PIL issue. I will benchmark it to see if it actually speeds up data-loading. |
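Building on that finding, here is a minimal sketch (class and helper names are made up for illustration) of keeping the raw mp.Array handle inside the dataset and only materializing the NumPy view lazily in each worker process, so the large buffer itself is never pickled:
import numpy as np
import torch
from torch.utils.data import Dataset

class LazySharedDataset(Dataset):
    def __init__(self, shared_mem, length):
        self.shared_mem = shared_mem   # SynchronizedArray handle: cheap to pickle
        self.length = length
        self._np_view = None           # created per process on first access

    def _view(self):
        if self._np_view is None:
            # ctypeslib wraps the shared buffer without copying it
            self._np_view = np.ctypeslib.as_array(self.shared_mem.get_obj())
        return self._np_view

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        # Real code would reshape/decode here. Workers writing to disjoint
        # indices don't clash, but mixing reads and writes of the same index
        # would need the Array's lock.
        row = self._view()[idx]
        return torch.tensor(row)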
st178560 | I will also have to check validity - I wouldn't want the dataloader workers overwriting each other's data |
st178561 | Your diagnosis of the problem source is correct: the dataset passed to the Dataloader will be distributed to the workers and therefore will be pickled. Good work, looking forward to your results. |
st178562 | @mrshenli This question is not that closely related to pytorch, but it is really hard to find related resources on stackoverflow etc. Development of my framework (mentioned in my last post) has finally reached a critical stage, and I am testing the distributed part now.
So do you know of any python libraries, such as DEMi or fuzzowski, which can simulate a connection layer and let users send & recv, shift, reorder & delay, and poke & inspect messages? Since I need to simulate the RPC layer, the two libraries mentioned above are not appropriate for this task. What is the pytorch development team using to test its rpc service? Any ideas would be great.
BTW, my library is hosted at https://github.com/iffiX/machin 2. Come and have a look! The tutorial is not complete yet, but the API documentation and tests for most core functions should be close to complete now; I will release the first milestone soon. |
st178563 | Hey @iffiX, thanks a lot for sharing the exciting project!! We will certainly study and share with other team members.
Regarding tests for delay/drop/retry messages, @osalpekar implemented a faulty agent 1 for that purpose, and the majority of faulty tests can be found here 1.
So do you know of any python libraries, such as DEMi or fuzzowski, which can simulate a connection layer and let users send & recv, shift, reorder & delay, and poke & inspect messages? Since I need to simulate the RPC layer, the two libraries mentioned above are not appropriate for this task.
If this is just for simulating RPCs, does mock work in this case? Something like this 1.
Also cc TensorPipe expert @lcw, do you know any good tools for this purpose? |
st178564 | Mock is not sufficient, because I would like to implement fuzz-based tests for machin.parallel.distributed.election.ElectionGroupStableBase and other higher-level modules built on this core. Common white-box and black-box testing are not good enough for testing distributed core functions, so my testing idea comes from this github repo and the blog post in its "Fuzzing" section blog.
I think I will try to implement a simple framework myself today, maybe generating fuzzed data with boofuzz and writing an rpc simulation layer. I am still looking forward to other ideas from your team! |
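For what it's worth, a bare-bones sketch of that kind of in-process RPC simulation layer (all names hypothetical; no real transport involved) could look like:
import random
from collections import deque

class FakeRpcLayer:
    """In-process stand-in for an RPC transport that a fuzzer can poke at."""

    def __init__(self, drop_prob=0.0, reorder_prob=0.0, seed=0):
        self.queues = {}                # destination name -> deque of messages
        self.drop_prob = drop_prob
        self.reorder_prob = reorder_prob
        self.rng = random.Random(seed)

    def send(self, dst, msg):
        # Randomly drop or reorder messages to exercise timeout/retry/election paths.
        if self.rng.random() < self.drop_prob:
            return
        queue = self.queues.setdefault(dst, deque())
        if queue and self.rng.random() < self.reorder_prob:
            queue.appendleft(msg)       # deliver out of order
        else:
            queue.append(msg)

    def recv(self, dst):
        queue = self.queues.get(dst)
        return queue.popleft() if queue else None   # None models "nothing arrived yet"

# Example: pump randomized traffic through the fake layer.
layer = FakeRpcLayer(drop_prob=0.1, reorder_prob=0.2)
layer.send("worker_1", {"type": "vote", "term": 3})
print(layer.recv("worker_1"))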
st178565 | I’m implementing an algorithm which requires a lot of model evaluations. I want to parallelize model evaluations by using CPU. The code segment I want to parallelize is here (simplified for readability):
def test_img(network, datatest):
network.eval()
data_loader = DataLoader(datatest, batch_size=args.bs)
for idx, (data, target) in enumerate(data_loader):
result = network(data)
And for one evaluation it takes around 1.5s on CPU. The networks for evaluation are different, but the dataset is the same. I tried to use joblib.Parallel to parallelize this process:
results = Parallel(n_jobs=num_cpu, prefer="threads")(delayed(test_img)(network_lst[i], dataset) for i in range(N))
However, there seems to be no improvement from this method (it takes 15s for 10 evaluations). I specifically timed the result = network(data) line, and it takes 8 seconds. Therefore I think neither the network evaluation nor the dataloader is parallelized. Is there any way to pin the network and data to a specific CPU core, like data.to('cpu:0')? Is it even possible to use the CPU to parallelize model evaluation?
Any suggestions are appreciated! |
st178566 | By default pytorch will use multiple cpu cores to calculate:
import time
import multiprocessing as mp
import torch as t
def subproc():
# keep process showing in "top"
begin = time.time()
while time.time() - begin < 10:
a = t.ones([1000, 1000]) * t.ones([1000, 1000])
if __name__ == "__main__":
p = mp.Process(target=subproc, args=())
p2 = mp.Process(target=subproc, args=())
p.start()
p2.start()
print("started")
p.join()
p2.join()
print("joined")
It seems that you are using "threads", which does not help in Python because of the GIL; you must use processes, e.g.:
results = Parallel(n_jobs=num_cpu, prefer="processes")(delayed(test_img)(network_lst[i], dataset) for i in range(N)) |
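As a rough sketch of the process-based version with a plain multiprocessing pool (the models and dataset here are toy placeholders; pinning each worker to one intra-op thread avoids the processes fighting over cores):
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, TensorDataset

def eval_one(network, dataset, batch_size=64):
    torch.set_num_threads(1)     # one intra-op thread per worker process
    network.eval()
    loader = DataLoader(dataset, batch_size=batch_size)
    outputs = []
    with torch.no_grad():
        for data, target in loader:
            outputs.append(network(data))
    return torch.cat(outputs)

if __name__ == "__main__":
    network_lst = [torch.nn.Linear(10, 2) for _ in range(10)]   # placeholder models
    dataset = TensorDataset(torch.randn(256, 10), torch.zeros(256))
    with mp.Pool(processes=4) as pool:
        results = pool.starmap(eval_one, [(net, dataset) for net in network_lst])
    print(len(results), results[0].shape)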
st178567 | I am reading the DistributedDataParallel tutorial 7. The last line of the following snippet confuses me:
if rank == 0:
# All processes should see same parameters as they all start from same
# random parameters and gradients are synchronized in backward passes.
# Therefore, saving it in one process is sufficient.
torch.save(ddp_model.state_dict(), CHECKPOINT_PATH)
# Use a barrier() to make sure that process 1 loads the model after process
# 0 saves it.
dist.barrier()
# configure map_location properly
map_location = {'cuda:%d' % 0: 'cuda:%d' % rank}
ddp_model.load_state_dict(
torch.load(CHECKPOINT_PATH, map_location=map_location))
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(rank)
loss_fn = nn.MSELoss()
loss_fn(outputs, labels).backward()
optimizer.step()
# Use a barrier() to make sure that all processes have finished reading the
# checkpoint
dist.barrier()
If the last line is used to ensure all processes finish reading, why doesn't it directly follow ddp_model.load_state_dict?
For each iteration, do we need to call dist.barrier()? |