st177668 | Hi,
this question is somewhat in between PyTorch’s implementation of DistributedDataParallel and Paramters Server in general. My “theory” source is the Dive into Deep Learning book [1].
Figure 12.7.1 on [1] suggests the following approach:
Assume batch size 32. If we have one GPU and 128 training datapoints, each epoch has 4 mini batches. After every mini batch, we update our model (i.e. there are 4 updates). Hence, we have to calculate four gradients (one per minibatch).
For the multiple GPUs case, assume two GPUs and 128 training datapoints. We feed each GPU a mini-batch, let them calculate a gradient, sum them and update our model with the sum. Hence, there are two steps instead of four involved (from a BSP point of view).
My questions are the following:
Is my described understanding of how parameter servers work correct? In particular, I am not sure whether we keep the same batch size of 32 per GPU or whether we have to divide the batch size by the number of GPUs.
Why do we sum the gradients instead of averaging them? This is even more confusing to me, as in the DistributedDataParallel documentation of PyTorch (cannot post the link due to the link limit for new users), there is the following statement:
When a model is trained on M nodes with batch=N, the gradient will be M times smaller when compared to the same model trained on a single node with batch=M*N (because the gradients between different nodes are averaged). You should take this into consideration when you want to obtain a mathematically equivalent training process compared to the local training counterpart.
There are two things confusing here:
2.1. It states that the gradients are averaged, which does not agree with the D2L book, which states that we sum the gradients.
2.2. Until now, I always thought that in mini-batch gradient descent, the loss function (optimization goal) averages the error of the mini-batch data points. Hence, if M nodes run mini-batch gradient descent with batch size N, and we take the average of their gradients, we should receive a number that is in the same order of magnitude as if 1 node runs mini-batch gradient descent with batch size N*M, as the single node averages the error functions of N*M data points for the loss function, while with M nodes we just take the average of averages.
I am not sure whether the DistributedDataParallel class of PyTorch can be seen as a parameter server (especially because there even is a guide on how to build a parameter server in PyTorch [3]), but it maps to what is described in the book as a parameter server.
Any help on resolving my confusion is much appreciated. Thank you very much!
Kind regards,
Maximilian
[1] https://d2l.ai/chapter_computational-performance/parameterserver.html
[2] https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html |
st177669 | Hi,
I’ve had some discussion via mail with Shen Li, one of the authors of the VLDB DDP paper, and we want to move our discussion here to make it accessible to everyone. Find our past communication below:
For the multiple GPUs case, assume two GPUs and 128 training datapoints. We feed each GPU a mini-batch, let them calculate a gradient, sum them and update our model with the sum. Hence, there are two steps instead of four involved (from a BSP point of view).
It depends. If you still wanna maintain a batch size of 32 for each iteration, the per-process batch size should shrink to (32 / world_size). But this is an application decision, and PyTorch won’t make the call for you.
Now, in the DDP documentation, one can find the following statement: When a model is trained on M nodes with batch=N, the gradient will be M times smaller when compared to the same model trained on a single node with batch=M*N (because the gradients between different nodes are averaged). You should take this into consideration when you want to obtain a mathematically equivalent training process compared to the local training counterpart.
It’s a two-step computation. Since AllReduce can only do sum/prod/max/min, we cannot directly use AllReduce to compute a mean. Hence, DDP uses AllReduce to sum the gradients and then divides them by the world_size.
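For illustration, a minimal sketch of this sum-then-divide pattern (my own helper name, assuming a process group is already initialized and `tensor` is a local gradient tensor):

```python
import torch
import torch.distributed as dist

def allreduce_mean(tensor):
    # Step 1: AllReduce only supports sum/prod/max/min, so sum across all processes in place.
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
    # Step 2: divide by the number of processes to turn the sum into a mean.
    tensor /= dist.get_world_size()
    return tensor
```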
Now, in the DDP documentation, one can find the following statement: When a model is trained on M nodes with batch=N, the gradient will be M times smaller when compared to the same model trained on a single node with batch=M*N (because the gradients between different nodes are averaged). You should take this into consideration when you want to obtain a mathematically equivalent training process compared to the local training counterpart.
I believe the OSS author who added this line is removing it on master:
Because it actually depends on the loss function. Check out this:
My second question is also about the above statement from the DDP documentation. In mini-batch gradient descent, the loss function (optimization goal) averages the error of the mini-batch data points. Hence, if M nodes run mini-batch gradient descent with batch size N, and we take the average of their gradients, we should receive a number that is in the same order of magnitude as if 1 node runs mini-batch gradient-descent with batch size N*M, as the single node averages the error functions of nm data points for the loss function, while with M nodes we just take the average of averages. I do not understand why the documentation says there is a change in order of magnitude. If we would sum instead of average in the DDP module, it would make sense tho.
This is true, but it depends on the loss function. Nothing prevents users from using sum() as the loss function.
Lastly, I would like to ask why you do not consider the DDP module a parameter server. Isn’t it doing exactly what a PS is supposed to do?
These papers can answer that. I just list two, but many other papers also view it this way, i.e., they consider AllReduce and PS as different schemes, which makes sense to me, as one is collective communication and the other is P2P.
You do provide a tutorial on how to implement a PS with the RPC framework, but I currently do not understand why this is necessary. The tutorial states that the DDP module is only to be used for single-node multi-GPU computations, but in the source code of distributed.py it is stated that multi-node computations are supported as well.
Can you point me to the line that claims DDP is single-node multi-GPU? This is certainly wrong. DP is single-node multi-GPU, but DDP can run on multi-node. Also, we heavily rely on the community. If you feel anything is wrong in the code or doc. Please feel free to send in PRs to fix them.
My first question is about the gradient reduction. Figure 12.7.1 on
[1] suggests the following approach for gradient reduction:
Assume batch size 32. If we have one GPU and 128 training datapoints,
each epoch has 4 mini batches. After every mini batch, we update our
model (i.e. there are 4 updates). Hence, we have to calculate four
gradients (one per minibatch).
For the multiple GPUs case, assume two GPUs and 128 training datapoints.
We feed each GPU a mini-batch, let them calculate a gradient, sum them
and update our model with the sum. Hence, there are two steps instead of
four involved (from a BSP point of view).
Now, in the DDP documentation, one can find the following statement:
When a model is trained on M nodes with batch=N, the gradient will be M
times smaller when compared to the same model trained on a single node
with batch=M*N (because the gradients between different nodes are
averaged). You should take this into consideration when you want to
obtain a mathematically equivalent training process compared to the
local training counterpart.
The confusing part is that PyTorch averages the gradients. I do not
understand why this is the case, as [1] states we sum the gradients (and
that - in my opinion - makes sense as we want to parallelize the
gradient calculation and after parallel calculation go into the
direction of all gradients together).
My second question is also about the above statement from the DDP
documentation. In mini-batch gradient descent, the loss function
(optimization goal) averages the error of the mini-batch data points.
Hence, if M nodes run mini-batch gradient descent with batch size N, and
we take the average of their gradients, we should receive a number that
is in the same order of magnitude as if 1 node runs mini-batch
gradient-descent with batch size N*M, as the single node averages the
error functions of nm data points for the loss function, while with M
nodes we just take the average of averages. I do not understand why the
documentation says there is a change in order of magnitude. If we would
sum instead of average in the DDP module, it would make sense tho.
Lastly, I would like to ask why you do not consider the DDP module a
parameter server. Isn’t it doing exactly what a PS is supposed to do?
You do provide a tutorial on how to implement a PS with the RPC
framework, but I currently do not understand why this is necessary. The
tutorial states that the DDP module is only to be used for single-node
multi-GPU computations, but in the source code of distributed.py it is
stated that multi-node computations are supported as well.
To continue the discussion, I would like to follow up:
It seems like this PR was actually fixing the documentation for DDP: https://github.com/pytorch/pytorch/pull/47156
Now it states:
When a model is trained on M nodes with batch=N, the
gradient will be M times smaller when compared to the same model
trained on a single node with batch=M*N if the loss is summed (NOT
averaged as usual) across instances in a batch (because the gradients
between different nodes are averaged).
I do not understand the last part in brackets. I do see (and this is also what I suggested in my initial post) that if we sum the gradients (so just AllReduce with sum), the resulting gradient will be M times smaller. But I am a little confused about loss vs. gradient summing/averaging here. What we do on each node is calculate the loss per mini-batch. This loss per mini-batch is the average loss over all data points in the mini-batch, see e.g. [1] (e.g. for MSE, mse = \frac{1}{B} \sum_{i=1}^{B} (x_i - y_i)^2 with B the mini-batch size). This results in one gradient per mini-batch (and each node handles one mini-batch, so one gradient per node). Now the question is whether we want to sum or average the gradients of the nodes.
Hence, for me it should be:
When a model is trained on M nodes with batch=N, the
gradient will be M times larger when compared to the same model
trained on a single node with batch=M*N if the gradients are summed across instances in a batch.
Because in the distributed case, you sum M gradients. In the non-distributed case, you just have one gradient which results from the average loss.
Because what we can change is whether we want to AllReduce with a sum and then divide by the total number of gradients (= mean), or just AllReduce with a sum. I do not understand what the loss has to do with that, as we deal with gradients on that level.
The second thing I do not understand is why we want to average instead of summing the gradients in the first place. According to the book in my initial post, it should be summing, and for me, that's more intuitive as well. Is there a reason why we average by default?
[1] https://adventuresinmachinelearning.com/stochastic-gradient-descent/ |
st177670 | maxbit:
Because in the distributed case, you sum M gradients. In the non-distributed case, you just have one gradient which results from the average loss.
This is not accurate. In the distributed case, the processes first use AllReduce to sum the gradient and then divide it by the world size. So, the gradient is averaged. If you need a sum instead, you can multiply the param.grad field by world_size.
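As a sketch of that rescaling (assuming `ddp_model` is the DDP-wrapped model and `loss.backward()` has just completed):

```python
import torch.distributed as dist

# DDP has already AllReduce-averaged param.grad across processes during backward();
# multiply by world_size to recover the sum instead of the average.
world_size = dist.get_world_size()
for param in ddp_model.parameters():
    if param.grad is not None:
        param.grad.mul_(world_size)
```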
I do not understand what the loss has to do with that as we deal with gradients on that level.
This is mainly for mathematical equivalence, which can have an impact on how to tune other params like the LR. E.g., if you use MSE as the loss_fn in local training, then the grad you get is a per-sample average. When switching to DDP in this case, you might still want the grads to be a per-sample average, and hence it should be AllReduce sum divided by world_size. However, if you use sum() as the loss_fn in local training, the grad you get is a per-batch sum. And when switching to DDP (the per-process batch size probably should be N/M in this case), you might also want to keep the grad as a per-batch sum. In this case, the grad should be the AllReduce sum without division.
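To make that equivalence concrete (my own notation, not from the docs): let each of the M nodes hold a mini-batch B_k of N samples with per-sample loss \ell_i. With a mean loss per node, the local gradient is g_k = \frac{1}{N} \sum_{i \in B_k} \nabla \ell_i, and DDP's AllReduce average gives

\frac{1}{M} \sum_{k=1}^{M} g_k = \frac{1}{MN} \sum_{i=1}^{MN} \nabla \ell_i,

which is exactly the single-node gradient for a batch of M*N samples with a mean loss. With a sum loss per node, g_k = \sum_{i \in B_k} \nabla \ell_i, and DDP's average \frac{1}{M} \sum_{k} g_k is M times smaller than the single-node sum-loss gradient \sum_{i=1}^{MN} \nabla \ell_i, which is what the updated documentation sentence describes.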
The second thing I do not understand is why we want to average instead of summing the gradients in the first place. According to the book in my initial post, it should be summing, and for me, that's more intuitive as well. Is there a reason why we average by default?
That’s not something that the PyTorch Distributed package can decide for you. It is the application’s choice. |
st177671 | On the same machine, there are four different models, one on each GPU.
They all use the same dataset, but I don’t know how to serve it efficiently.
Naively I could make four copies of the dataset and create four Python processes to train them.
But is it the best way? Do all four copies of the dataset really have to be duplicated in memory? |
st177672 | Solved by mrshenli in post #2. |
st177673 | Trump:
Do all four copies of the dataset really have to be duplicated in memory?
If those dataset won’t be mutated, then they don’t have to be duplicated.
Naively I could make four copies of the dataset and create four Python processes to train them.
But is it the best way?
This approach has pros and cons. It will certainly use more memory for the duplicated dataset and the per-process CUDA context (~500MB of CUDA memory each). But the benefit is that there won’t be GIL contention. Another option could be using multiple threads with multiple CUDA streams. In this way, the dataset and CUDA context can be shared, and multi-threading will also allow concurrent processing on CUDA devices. But the CPU computations from multiple threads will still compete to grab the GIL. |
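A rough sketch of the multi-thread option described above (here `models` is a list of four models and `shared_dataset` a single dataset instance shared by all threads; both are placeholders, and for simplicity each thread just drives its own device rather than managing explicit CUDA streams):

```python
import threading
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def train_on_device(model, device, dataset):
    # Each thread drives one GPU; the (read-only) dataset object is shared across threads.
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    model = model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

threads = [
    threading.Thread(target=train_on_device, args=(m, f"cuda:{i}", shared_dataset))
    for i, m in enumerate(models)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```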
st177674 | Hi there,
I’m trying to run this tutorial locally for one parameter server and two workers.
The problem is I’m getting the below error:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "rpc_parameter_server.py", line 228, in run_worker
run_training_loop(rank, num_gpus, train_loader, test_loader)
File "rpc_parameter_server.py", line 187, in run_training_loop
dist_autograd.backward(cid, [loss])
RuntimeError: Error on Node 0: one of the variables needed for gradient computation has been modified by an inplace operation: [CPUFloatType [32, 1, 3, 3]] is at version 5; expected version 4 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Here’s my torch version if needed:
pip3 freeze | grep torch
torch==1.5.1+cpu
torchtext==0.6.0
torchvision==0.6.1+cpu
Thanks in advance for any advice! |
st177675 | Hey @rvarm1, I wonder if we need a lock in ParameterServer.forward, otherwise if the execution of forward got sliced into multiple pieces, interleaving execution from different RPC threads could mess up the autograd graph state? |
st177676 | I’m getting this, too, but interestingly only when I have > 1 worker node.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "rpc_parameter_server.py", line 224, in run_worker
run_training_loop(rank, num_gpus, train_loader, test_loader)
File "rpc_parameter_server.py", line 183, in run_training_loop
dist_autograd.backward(cid, [loss])
RuntimeError: Error on Node 0: one of the variables needed for gradient computation has been modified by an inplace operation: [CUDAFloatType [128, 10]], which is output 0 of TBackward, is at version 3; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! |
st177677 | I did some looking into this. Adding a lock on ParameterServer.forward has no effect.
def forward(self, inp):
    # forward_lock defined globally
    with forward_lock:
        inp = inp.to(self.input_device)
        out = self.model(inp)
        # This output is forwarded over RPC, which as of 1.5.0 only accepts CPU tensors.
        # Tensors must be moved in and out of GPU memory due to this.
        out = out.to("cpu")
        return out |
st177678 | Thanks for looking into this! I confirm that I am able to repro the issue, working on root causing it now. |
st177679 | I want to switch from torch DDP to apex amp DDP to make use of mixed precision. I’m training two DDP models, so I’m using two process groups to make sure the gradients are synchronized correctly. Is there a way to pass a process group to amp DDP to get the same performance? |
st177680 | Solved by ptrblck in post #2. |
st177681 | You don’t need to switch to apex/DDP to use automatic mixed-precision.
We recommend to use the PyTorch implementations of DDP and the native mixed-precision implementation via torch.cuda.amp. |
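For reference, a minimal sketch of the native mixed-precision pattern with torch.cuda.amp (here `model`, `optimizer`, `criterion`, and `loader` are placeholders assumed to exist already; `model` can be the DDP-wrapped model):

```python
import torch

scaler = torch.cuda.amp.GradScaler()

for data, target in loader:
    optimizer.zero_grad()
    # run the forward pass under autocast so eligible ops use float16
    with torch.cuda.amp.autocast():
        output = model(data)
        loss = criterion(output, target)
    # scale the loss so small fp16 gradients don't underflow
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```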
st177682 | Oh, I see. Thanks a lot! Didn’t know about the torch.cuda.amp
It works with the amp and I am able to fit a batch_size twice as big on the GPU now.
One question though, I’m actually experiencing a slowdown with the increase of batch_size
Is this expected behaviour? I was hoping to get the same step execution time, but a bigger batch_size so the epoch gets faster. |
st177683 | It depends a bit where the current bottleneck is.
Using amp should reduce the execution time, if TensorCores can be used. However, increasing the batch size would also increase the time again. The net benefit depends on the achieved speedup through the usage of TensorCores vs. the increased workload.
That being said, increasing the batch size also increases the data loading time, since each worker has to load more samples now and your current setup might face a data loading bottleneck.
You could profile the data loading as shown in the ImageNet example and check if this time decreases during the training, which would mean that all workers can preload the batches in the background while the GPU is busy with the training. |
st177684 | I did the profiling as you advised, and the dataloader is not the issue here (the time is almost the same).
But I noticed that the per-step slowdown is still a speedup in terms of throughput. It now takes 1.2x less time to process the same number of samples than before. So I guess that’s how it’s supposed to be. Thanks a lot for your help! |
st177685 | Hi
I have an iterable dataset and I need to define a distributed data sampler for it to train efficiently on TPUs. Here is the example distributed sampler for TPUs in the case of non-iterable datasets. Could you assist me with providing an example of an iterable dataset (like a tf.data.Dataset) which gets converted to an iterable dataset in PyTorch, plus a distributed sampler which can be used on TPUs? Thank you.
def get_tpu_sampler(dataset: torch.utils.data.dataset.Dataset):
    if xm.xrt_world_size() <= 1:
        return RandomSampler(dataset)
    return DistributedSampler(dataset, num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal()) |
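Not an XLA-specific answer, but as a general sketch, an iterable dataset can be sharded manually by rank so that each replica sees a disjoint slice of the stream (all names below are illustrative):

```python
from torch.utils.data import IterableDataset

class ShardedIterableDataset(IterableDataset):
    def __init__(self, source_iterable, rank, world_size):
        self.source_iterable = source_iterable
        self.rank = rank
        self.world_size = world_size

    def __iter__(self):
        # each replica keeps every world_size-th sample, offset by its rank
        for idx, sample in enumerate(self.source_iterable):
            if idx % self.world_size == self.rank:
                yield sample
```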
st177686 | Hi @Rabeeh_Karimi, for all pytorch/xla (TPU) related questions, please open an issue in the pytorch/xla GitHub repo instead: https://github.com/pytorch/xla Thanks! |
st177687 | I am trying to train my model on multiple GPUs but I have some trouble with torch.distributions.Laplace that I call in the forward pass.
I have uploaded a minimal working example that runs fine without torch.nn.DataParallel but fails when using it.
Is there any way to make this code run on multiple GPUs? |
st177688 | Solved by Kushaj in post #5. |
st177689 | I wasn’t able to reproduce the error. On which pytorch version are you? I tested on 1.6. |
st177690 | Hmm, interesting, I am on 1.6.0 too. I checked again and found out that at least two GPUs need to be available to reproduce the error |
st177691 | Is there any specific reason for using DataParallel instead of DistributedDataParallel? I have experience with only single GPU machines so I don’t know about the details here. |
st177692 | No particular reason, I have just seen more examples using DataParallel.
But it could be worth trying out to see if things look different with DistributedDataParallel |
st177693 | DistributedDataParallel seems to work without problems, thanks for the hint
I have uploaded a gist showing how the code now runs with world_size=2 |
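For completeness, a generic minimal spawn-based DDP setup looks roughly like this (this is not the linked gist; the model, address, and port are placeholder values):

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = DDP(nn.Linear(10, 2).to(rank), device_ids=[rank])
    out = model(torch.randn(8, 10).to(rank))
    out.sum().backward()  # gradients are averaged across the two processes
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2, join=True)
```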
st177694 | I have a single machine with two GPUs. This error occurred when I used the command ‘CUDA_VISIBLE_DEVICES=1,0 python -m torch.distributed.launch --nproc_per_node=2 train.py’ to train my model in parallel.
Here’s my code, could anyone help me?
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
torch.distributed.init_process_group(backend='nccl')
parser = argparse.ArgumentParser(description='param')
parser.add_argument('--iters', default=10,type=str)
parser.add_argument('--data_size', default=2048,type=int)
parser.add_argument('--batch_size', default=256,type=int)
parser.add_argument('--loss_name', default='KL',type=str)
parser.add_argument('--lr', default=0.01,type=int)
parser.add_argument('--reg_param', default=0.1,type=int)
parser.add_argument('--save_loss_path', default='./',type=str)
parser.add_argument('--use_gpu', type=bool, default=False)
def cleanup():
dist.destroy_process_group()
def train(iters,
data_size,
batch_size,
loss_name,
lr,
reg_param,
save_loss_path,
use_gpu):
save_loss_csv = save_loss_path + loss_name + '.csv'
create_csv_4_KL(path=save_loss_csv)
atlas = np.load(atlas_file)
if use_gpu:
model = Model().to(device)
model = torch.nn.parallel.DistributedDataParallel(model)
else:
model = Model()
opt = Adam(model.parameters(), lr=lr)
if loss_name == 'KL':
from losses import KL_Divergence
loss_fun = KL_Divergence
elif loss_name == 'MSE':
from losses import mse_loss
loss_fun = mse_loss
elif loss_name == 'NCC':
from losses import ncc_loss
loss_fun = ncc_loss
else:
print("There's no such a loss fuction {}".format(loss_name))
import losses
Grad_loss = losses.gradient_loss
train_generator = DataGenerater(json_path=json_path, data_size=data_size)
train_set = DataLoader(train_generator, batch_size=batch_size, shuffle=True, num_workers=16,
sampler=DistributedSampler(train_generator))
reg_param = reg_param
fixed = torch.Tensor(atlas)
fixed.unsqueeze_(0)
fixed.unsqueeze_(0)
if use_gpu:
fixed = fixed.expand(batch_size, 1, 128, 128, 128).cuda()
fixed = fixed.expand(batch_size, 1, 128, 128, 128)
fixed_norm = fixed / 255
if use_gpu:
fixed_norm = fixed_norm.to(device)
for epoch in range(iters):
start_time = time.time()
loss_epoch = 0.0
for i, batch_moving in enumerate(train_set):
if use_gpu:
batch_moving_cuda = batch_moving.cuda()
else:
batch_moving_cuda = batch_moving
batch_moving_cuda_norm = batch_moving_cuda / 255
wrap, flow = model(batch_moving_cuda_norm, fixed_norm)
loss = loss_fun(wrap, fixed_norm) + reg_param * Grad_loss(flow)
loss_epoch += loss.item()
opt.zero_grad()
loss.backward()
opt.step()
append_csv(save_loss_csv,
zip([[epoch + 1]], [loss_epoch]))
end_time = time.time()
loop_cost = end_time - start_time
print("After [ {} ] seconds and {} epoches, selected the {} loss to train, the loss is [ {} ]."
.format(loop_cost, epoch + 1, loss_name, loss_epoch / (2048 / batch_size)))
para_save_file = save_loss_path + 'res/' + 'MyModel-slice-{}-{}-{}-{}.pth'.format(loss_name, iters, reg_param, now)
if os.path.exists(para_save_file):
os.remove(para_save_file)
torch.save(model.state_dict(), para_save_file)
print("The model saved in {}".format(para_save_file))
if __name__ == "__main__":
args = parser.parse_args()
now = datetime.now().date()
json_path = '/home/mamingrui/code/MyModel/brain.json'
atlas_file = '/home/mamingrui/data/atlas/atlas.npy',
# initialize the process group
dist.init_process_group("nccl")
local_rank = torch.distributed.get_rank()
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)
train(iters=args.iters,
data_size=args.data_size,
batch_size=args.batch_size,
loss_name=args.loss_name,
lr=args.lr,
reg_param=args.reg_param,
save_loss_path=args.save_loss_path,
use_gpu=args.use_gpu)
cleanup()
The error report is below:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
usage: train.py [-h] [--iters ITERS] [--data_size DATA_SIZE]
[--batch_size BATCH_SIZE] [--loss_name LOSS_NAME] [--lr LR]
[--reg_param REG_PARAM] [--save_loss_path SAVE_LOSS_PATH]
[--use_gpu USE_GPU]
train.py: error: unrecognized arguments: --local_rank=0
usage: train.py [-h] [--iters ITERS] [--data_size DATA_SIZE]
[--batch_size BATCH_SIZE] [--loss_name LOSS_NAME] [--lr LR]
[--reg_param REG_PARAM] [--save_loss_path SAVE_LOSS_PATH]
[--use_gpu USE_GPU]
train.py: error: unrecognized arguments: --local_rank=1
Traceback (most recent call last):
File "/home/mamingrui/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/mamingrui/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/mamingrui/anaconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 253, in <module>
main()
File "/home/mamingrui/anaconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 249, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/mamingrui/anaconda3/bin/python', '-u', 'train.py', '--local_rank=1']' returned non-zero exit status 2. |
st177695 | The launcher will pass a --local_rank arg to your train.py script, so you need to add that to the ArgumentParser.
Besides, you need to pass the rank, world_size, and init_method (which basically contains MASTER_ADDR and MASTER_PORT) to dist.init_process_group, either through arguments or env vars.
This example might be helpful: https://github.com/pytorch/examples/pull/743 |
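A hedged sketch of the env-var route: torch.distributed.launch exports MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE and passes --local_rank to the script, so inside train.py something like the following picks them up (names beyond the launcher's own are placeholders):

```python
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # supplied by the launcher
args = parser.parse_args()

# MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE are exported by the launcher,
# so init_method="env://" reads them automatically.
dist.init_process_group(backend="nccl", init_method="env://")
torch.cuda.set_device(args.local_rank)
```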
st177696 | I also met this problem, but I didn’t understand the answer above. How did you solve it?
thank you |
st177697 | The error mentioned in the original post basically means that the launcher script tries to pass --local_rank=1 as an argument to your script (i.e., train.py in this case). However, train.py is not configured to accept that argument.
train.py: error: unrecognized arguments: --local_rank=1
To solve this issue, you can add the following to your ArgumentParser.
parser.add_argument("--local_rank", type=int, default=0) |
st177698 | thanks.but after i add parser.add_argument("–local_rank", type=int, default=0),this errors also occurred. |
st177699 | Today I want to use the distributed communication package to train on ImageNet; however, I found that many processes have been created on the node.
script.py:
def main():
    ...
    model.to(device)
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=None, output_device=None, find_unused_parameters=True)
    ...

if __name__ == '__main__':
    print('Use {back} as backend.'.format(back=args.backend))
    dist.init_process_group(backend=args.backend, init_method='env://', timeout=datetime.timedelta(seconds=1000))
    main()
bash.sh:
CUDA_VISIBLE_DEVICES=0,1,2,3
export GLOO_SOCKET_IFNAME=ib0
export OMP_NUM_THREADS=24
NPROC_PER_NODE=4
SLURM_JOB_NUM_NODES=4
...
COMMAND="script.py -a inception_v3 --print-freq 1000 --backend gloo --nproc-per-node 4 --pretrained --multiprocessing-distributed $HOME/ImageNet"
python -m torch.distributed.launch \
--nproc_per_node=$NPROC_PER_NODE \
--nnodes=$SLURM_JOB_NUM_NODES \
--node_rank=$SLURM_NODEID \
--master_addr=$MIP \
--master_port=$MPORT \
$COMMAND > $HOME/thesis/PCL/log/"log_v1_inception_"${SLURM_JOB_ID}"_"${SLURM_NODEID}".out"
I have trained it on 4 nodes with 4 GPUs on each node.
I logged in to node0 and found that there are 16 processes on this node.
Is this abnormal? Or does DDP create nproc_per_node * node_num processes on each node? |
st177700 | Hey @khalil, sorry about the delay. Looking at the screenshot, they are actually the same set of processes, but each process used all GPUs. Any reason for setting device_ids to None, instead of device_ids=[local_rank]? If it is None, by default, DDP will use all visible CUDA devices. |
st177701 | Hi
I am running a small model with a small batch size on a TPU with 8 cores, and when it calls
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
it always goes out of memory on Google Cloud, and I am not sure where the issue is. I tried to reduce the batch size as much as possible, but the error is still there. Thanks |
st177702 | Hi,
Could you provide more details on your setup, as it is hard to reproduce as-is. CC: @ailzhang on XLA. |
st177703 | Hi there!
This is a question that might sound simple, but I can’t find any information that actually works when I implement it.
So, I have two models (net1 and net2), and I have two GPUs (“cuda:0” and “cuda:1”). I have the training and test dataloaders (train_dl, test_dl). I just want to train both models simultaneously, like:
net1.to(device1)
net2.to(device2)
accuracy1 = training(net1)
accuracy2 = training(net2)
How do I do this? How do I make sure these last two functions don’t execute sequentially? I want them to execute at the same time. I have read about something related to “spawn” but I have really no idea how to implement that.
Thank you! |
st177704 | Hi,
If net1 and net2 do not depend on each other you can just spawn 2 processes and train each model in its own process. You can do
import torch.multiprocessing as mp

def your_func(rank):
    if rank == 0:
        # train net1 here
        ...
    else:
        # train net2 here
        ...

mp.spawn(
    your_func,
    args=(),
    nprocs=2,
    join=True)
Would that work for you? |
st177705 | Thank you very much! This is exactly what I needed. I have only one more question:
If I wanted to return something from my function (such as the accuracy value), what would be the best way to do so? I was thinking about having a global list and updating that list inside the function, but this might be a little bit odd.
Thanks again!! |
st177706 | A global list may be tricky since each process would have its own copy of the global list. Overall you would need to communicate the accuracy across processes to aggregate them; you can look into Python multiprocessing, such as mp.Manager(): https://docs.python.org/3/library/multiprocessing.html. |
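One hedged sketch of that approach with a Manager dict collecting per-process results (the body of train_rank is a placeholder for the real training/evaluation):

```python
import torch.multiprocessing as mp
from multiprocessing import Manager

def train_rank(rank, results):
    # ... train net1 (rank 0) or net2 (rank 1) here ...
    accuracy = 0.0  # placeholder for the real evaluation result
    results[rank] = accuracy

if __name__ == "__main__":
    with Manager() as manager:
        results = manager.dict()  # proxy dict visible to all spawned processes
        mp.spawn(train_rank, args=(results,), nprocs=2, join=True)
        print(dict(results))  # e.g. {0: acc_of_net1, 1: acc_of_net2}
```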
st177707 | Hi, I’m trying to use DistributedDataParallel to enable training on multiple GPUs on a single node. According to what I read from online documents, the procedure is the following (please correct me if it’s not true):
dist.init_process_group(backend=backend, init_method=init_method, rank=rank, world_size=world_size)
model = DistributedDataParallel(model, device_ids=[local_rank])
And then spawn the process in the main function:
multiprocessing.spawn(fn, args=args, nprocs=world_size, join=True)
I have two questions regarding this:
What should I set these parameters to? e.g., backend, init_method, rank, world_size… I didn’t find an example showing details of setting these parameters.
I saw some examples online used DistributedSampler. Is this necessary for using DistributedDataParallel?
Thanks!! |
st177708 | Hi,
You can take a look at https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
Your params: backend is a type of communication between nodes (depends on your hardware and setup), e.g. backend = ‘nccl’; local rank is given to your function (fn) by multiprocessing.spawn(); rank is computed based on node rank and local rank; world_size is how many ranks you want to run in total (typically = number of nodes * number of gpus in a node).
DistributedSampler is useful for loading data across the ranks you’ve created by multiprocessing.spawn(), so they don’t share the same pieces of data in training. |
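A small sketch of the DistributedSampler pattern mentioned above (all values here are placeholders; in a real setup rank/world_size come from your spawn/init logic):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

rank, world_size, num_epochs = 0, 4, 2  # placeholder values

dataset = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)  # no shuffle when a sampler is given

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # reshuffles each epoch, consistently across ranks
    for batch in loader:
        pass  # forward/backward on this rank's shard
```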
st177709 | Hi, thanks for your reply! I’m still a bit confused about the concept of rank. Does it correspond to the number of GPUs available?
For example, if I’m using 1 node with 4 gpus to train, how should I set those parameters? |
st177710 | You can think of rank as an ID of a process controlling one (preferred) or multiple GPUs.
Local rank is a local ID of such a process.
If you have 1 node with 4 GPUs, we prefer that you use DDP with 4 ranks (processes). |
st177711 | Hi there,
I’m currently trying to run a demo of a PyTorch model trained with 2 nodes, where each node contains 2 GPUs. It is based on the tutorial, and I’m using Open MPI to handle the communication. Furthermore, the backend for torch.distributed.init_process_group is ‘mpi’ (I followed the tutorials provided to build PyTorch from source). I am forced to use the ‘mpi’ backend and Open MPI since these are the only compatible options available on the cluster I have access to.
Here is the main function, where I use the mpi4py library to establish the connection between Open MPI and PyTorch:
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP
from mpi4py import MPI


def example(local_offset, global_rank, world_size, hostname):
    rank = local_offset
    if global_rank == 1:
        rank += 2
    print(f'checkpoint{1} with rank {rank} and world_size {world_size}')
    dist.init_process_group(backend="mpi", rank=rank, world_size=world_size)
    print(f'checkpoint{2}')
    model = nn.Linear(10, 10).to(rank)
    print(f'checkpoint{3}')
    ddp_model = DDP(model, device_ids=[rank])  # ,output_device=rank)
    print(f'checkpoint{4}')
    loss_fn = nn.MSELoss()
    print(f'checkpoint{5}')
    optimizer = optimizer.SGD(ddp_model.parameters(), lr=0.001)
    print(f'checkpoint{6}')
    outputs = ddp_model(torch.randn(20, 10).to(rank))
    print(f'checkpoint{7}')
    labels = torch.randn(20, 10).to(rank)
    print(f'checkpoint{8}')
    loss_fn(outputs, labels).backward()
    print(f'checkpoint{9}')
    optimizer.step()


def main():
    comm = MPI.COMM_WORLD
    world_size = comm.Get_size()
    world_rank = comm.Get_rank()
    hostname = MPI.Get_processor_name()
    print(f"\nI am {world_rank} of {world_size} in {hostname}")
    curr_env = os.environ.copy()
    curr_env['MASTER_ADDR'] = 'hostname_addr'
    curr_env['MASTER_PORT'] = '12345'
    curr_env['WORLD_SIZE'] = str(world_size*2)
    mp.spawn(example,
             args=(world_rank, world_size*2, hostname,),
             nprocs=world_size,
             join=True)


if __name__ == "__main__":
    main()
Currently, I get the following exception -
I am 1 of 2 in node050
I am 0 of 2 in node049
checkpoint1 with rank 1 and world_size 4
checkpoint1 with rank 0 and world_size 4
checkpoint1 with rank 2 and world_size 4
checkpoint1 with rank 3 and world_size 4
--------------------------------------------------------------------------
Open MPI has detected that this process has attempted to initialize
MPI (via MPI_INIT or MPI_INIT_THREAD) more than once. This is
erroneous.
--------------------------------------------------------------------------
[node049:12317] *** An error occurred in MPI_Init_thread
[node049:12317] *** reported by process [3237216257,0]
[node049:12317] *** on a NULL communicator
[node049:12317] *** Unknown error
[node049:12317] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[node049:12317] *** and potentially your MPI job)
Traceback (most recent call last):
File "/tutorial_prof_students/distrib_data_parallel.py", line 116, in <module>
main()
File "/tutorial_prof_students/distrib_data_parallel.py", line 113, in main
join=True)
File "/pyvenv/condaMPI2/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 247, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/pyvenv/condaMPI2/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 205, in start_processes
while not context.join():
File "/pyvenv/condaMPI2/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 160, in join
exit_code=exitcode
torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with exit code 1
[fs0:23200] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init: invoked multiple times
[fs0:23200] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[fs0:23200] 3 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle
I’ve tried several recommendations from other discussions, like the one here, but it didn’t work.
What am I doing incorrectly?
Any tips and help would be great!
Thanks |
st177712 | Hi
Some questions:
could you provide the code of your example() that contains torch.distributed.init_process_group?
what is world_size in your example? (It seems like on each node you should spawn number of processes = number of GPUs = 2.)
could you add logging before and after torch.distributed.init_process_group and see if the process group is being initialized correctly? |
st177713 | Yes, I’ve updated the code with the example function and some logging. The issue is at torch.distributed.init_process_group, as the error log shows.
Regarding the world_size, I specify 4 processes, where each node contains 2. |
st177714 | I see, the problem is with MPI initialization. Could you try instructions here:
https://pytorch.org/tutorials/intermediate/dist_tuto.html (under MPI Backend).
I.e.
1. Replace the content under `if __name__ == '__main__':` with `init_process(0, 0, run, backend='mpi')`.
2. Run `mpirun -n 4 python myscript.py`. |
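As a hedged sketch (not the tutorial's exact code) of what that init_process helper can look like with the MPI backend, where mpirun supplies the rank and world size:

```python
import torch.distributed as dist

def init_process(rank, size, fn, backend='mpi'):
    # with the MPI backend, rank and size are determined by mpirun,
    # so the arguments are ignored and init_process_group is called without them
    dist.init_process_group(backend)
    fn(dist.get_rank(), dist.get_world_size())
```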
st177715 | Yes, I’ve updated the code accordingly and ran with 2 GPUs (1 per node) but now I get a similar but less descriptive error:
checkpoint1 with rank 0 and world_size 0
checkpoint1 with rank 0 and world_size 0
--------------------------------------------------------------------------
Open MPI has detected that this process has attempted to initialize
MPI (via MPI_INIT or MPI_INIT_THREAD) more than once. This is
erroneous.
--------------------------------------------------------------------------
[node024:18276] *** An error occurred in MPI_Init_thread
[node024:18276] *** reported by process [3619422209,0]
[node024:18276] *** on a NULL communicator
[node024:18276] *** Unknown error
[node024:18276] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[node024:18276] *** and potentially your MPI job)
[fs0:19944] 1 more process has sent help message help-mpi-runtime.txt / mpi_init: invoked multiple times
[fs0:19944] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[fs0:19944] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle
Note that the logging output appears before the dist.init_process_group call.
If I perform the basic checks on both nodes I get the expected output:
1.8.0a0+262bd64
1.8.0a0+262bd64
Hello from P0 on node024 out of 20
<torch.cuda.device object at 0x2aaafd2efc10>
1
GeForce GTX TITAN X
True
0
-------------------------
Hello from P1 on node025 out of 20
<torch.cuda.device object at 0x2aaafd2efd50>
1
GeForce GTX TITAN X
True
0
------------------------- |
st177716 | I am trying to run my model using DDP on a single node with 3 GPUs. I only intend to use two GPUs so I used os.environ["CUDA_VISIBLE_DEVICES"]="1,2".
I started running the code with only one process
python -m torch.distributed.launch --nproc_per_node=1 train.py
The code runs, but when I check nvidia-smi I see processes running on two GPUs.
Process 62966 corresponds to the DDP job. Ignore process 2815. I did not understand why a process is running on GPU 2 without using any resources. The code seems to run alright in this case.
When I run the command with two processes
python -m torch.distributed.launch --nproc_per_node=2 train.py
I see two processes are created, but both processes run on both GPUs. The code does not run and is stuck at torch.distributed.barrier().
my code for init_process_group:
def init_distributed_mode(args):
    args.rank = int(os.environ["RANK"])
    args.world_size = int(os.environ['WORLD_SIZE'])
    args.gpu = int(os.environ['LOCAL_RANK'])
    args.distributed = True
    torch.cuda.device(args.gpu)
    args.dist_backend = 'nccl'
    torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=args.world_size, rank=args.rank)
    torch.distributed.barrier()
How do I limit one process to one GPU and keep the code from getting stuck at torch.distributed.barrier()? |
st177717 | Solved by agolynski in post #2.
st177718 | Hi,
Could you try torch.cuda.set_device() instead? torch.cuda.device is a context manager; also see https://github.com/pytorch/pytorch/issues/1608 |
st177719 | Thanks, the code started running and is no longer stuck at the distributed barrier, but the nvidia-smi output still doesn’t make sense to me. Two processes are running on both GPUs.
Is this the expected behavior or is something wrong? |
st177720 | How do you initialize DDP, do you provide the correct device to it? e.g.
ddp_model = DDP(model, device_ids=[rank]) |
st177721 | args.gpu = int(os.environ['LOCAL_RANK'])
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) |
st177722 | I just run a toy example on my machine with 2 processes and i got:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 882788 C ...cal/miniconda3/envs/pytorch3/bin/python 945MiB |
| 1 882789 C ...cal/miniconda3/envs/pytorch3/bin/python 945MiB |
+-----------------------------------------------------------------------------+ |
st177723 | This is a simple model based on model parallelism which runs on GPUs 0 and 1. How do I save the model after training and load it back so that I can test my model on the CPU? |
class Net(nn.Module):
    def __init__(self, gpu0, gpu1):
        super(Net, self).__init__()
        self.gpu0 = gpu0
        self.gpu1 = gpu1
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1), nn.ReLU(), nn.Conv2d(32, 64, 3, 1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.25), nn.Flatten(1),
        ).cuda(gpu0)
        self.feat = nn.Sequential(
            nn.Linear(9216, 128), nn.BatchNorm1d(128),
            nn.ReLU(), nn.Dropout2d(0.5), nn.Linear(128, 10)
        ).cuda(gpu1)

    def forward(self, x):
        x = self.conv(x).cuda(self.gpu1)
        x = self.feat(x)
        output = F.log_softmax(x, dim=1)
        return output |
st177724 | You could push all parameters and buffers back to the CPU and store the state_dict:
# after training
model.cpu()
torch.save(model.state_dict(), 'filename.pth')
In another script you would be able to create the model instance, load the state_dict, and perform the inference.
However, unfortunately you have defined the cuda() calls explicitly in your model, so that creating the model instance would always try to push the model to the GPU(s).
I would generally recommend to try to write device-agnostic code via to() and pass the device into the __init__ instead of the GPU id. |
st177725 | @ptrblck The model is saved as a DDP module and I get this error:
if (gpu0 == 0 and epoch == 4):
    model.cpu()
    save_checkpoint({
        'epoch': epoch + 1,
        'state_dict': model.state_dict(),
        'optimizer': optimizer.state_dict(),
    }, epoch)

class Net(nn.Module):
    def __init__(self, gpu0, gpu1):
        super(Net, self).__init__()
        if gpu0 != "cpu":
            self.gpu0 = "cuda:" + str(gpu0)
            self.gpu1 = "cuda:" + str(gpu1)
        else:
            self.gpu0 = "cpu"
            self.gpu1 = "cpu"
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1), nn.ReLU(), nn.Conv2d(32, 64, 3, 1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.25), nn.Flatten(1),
        ).to(self.gpu0)
        self.feat = nn.Sequential(
            nn.Linear(9216, 128), nn.BatchNorm1d(128),
            nn.ReLU(), nn.Dropout2d(0.5), nn.Linear(128, 10)
        ).to(self.gpu1)

    def forward(self, x):
        x = self.conv(x).to(self.gpu1)
        x = self.feat(x)
        output = F.log_softmax(x, dim=1)
        return output

model = Net("cpu", "cpu")
PATH = '../model_4.pth'
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['state_dict'])
Error
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-ec88a5c31813> in <module>
2 PATH = '../model_4.pth'
3 checkpoint = torch.load(PATH)
----> 4 model.load_state_dict(checkpoint['state_dict'])
5 #optimizer.load_state_dict(checkpoint['optimizer'])
~/.conda/envs/praveen_tf/lib/python3.6/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
1043 if len(error_msgs) > 0:
1044 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1045 self.__class__.__name__, "\n\t".join(error_msgs)))
1046 return _IncompatibleKeys(missing_keys, unexpected_keys)
1047
RuntimeError: Error(s) in loading state_dict for Net:
Missing key(s) in state_dict: "conv.0.weight", "conv.0.bias", "conv.2.weight", "conv.2.bias", "feat.0.weight", "feat.0.bias", "feat.1.weight", "feat.1.bias", "feat.1.running_mean", "feat.1.running_var", "feat.4.weight", "feat.4.bias".
Unexpected key(s) in state_dict: "module.conv.0.weight", "module.conv.0.bias", "module.conv.2.weight", "module.conv.2.bias", "module.feat.0.weight", "module.feat.0.bias", "module.feat.1.weight", "module.feat.1.bias", "module.feat.1.running_mean", "module.feat.1.running_var", "module.feat.1.num_batches_tracked", "module.feat.4.weight", "module.feat.4.bias". |
st177726 | PATH = '../model_4.pth'
checkpoint = torch.load(PATH)
state_dict = checkpoint['state_dict']
for k, v in state_dict.items():
    print(k, v.get_device())
Output:
module.conv.0.weight -1
module.conv.0.bias -1
module.conv.2.weight -1
module.conv.2.bias -1
module.feat.0.weight -1
module.feat.0.bias -1
module.feat.1.weight -1
module.feat.1.bias -1
module.feat.1.running_mean -1
module.feat.1.running_var -1
module.feat.1.num_batches_tracked -1
module.feat.4.weight -1
module.feat.4.bias -1
model = Net("cpu", "cpu")
model_dict = model.state_dict()
for k, v in model_dict.items():
    print(k, v.get_device())
Output:
conv.0.weight -1
conv.0.bias -1
conv.2.weight -1
conv.2.bias -1
feat.0.weight -1
feat.0.bias -1
feat.1.weight -1
feat.1.bias -1
feat.1.running_mean -1
feat.1.running_var -1
feat.1.num_batches_tracked -1
feat.4.weight -1
feat.4.bias -1 |
st177727 | Solved the problem by doing this @ptrblck Thank you for all the help
state_dict = checkpoint['state_dict']
new_state_dict = OrderedDict()  # requires: from collections import OrderedDict
for k, v in state_dict.items():
    name = k[7:]  # strip the `module.` prefix added by the DDP wrapper
    new_state_dict[name] = v |
st177728 | When I use the DDP package to train ImageNet, there is always an OOM problem.
I checked the GPU utilization and found that there are many processes on each GPU.
What is the reason and how can I avoid this problem? |
st177729 | Solved by ptrblck in post #2. |
st177730 | This shouldn’t happen and each process should use one GPU and thus create one CUDA context.
Are you calling CUDA operations on all devices in the script or did you write device-agnostic code, which only uses a single GPU? |
st177731 | Thanks for your reply!
I have four GPUs in my node and I do training on one node. The previous problem was that I set the device incorrectly: I set device = torch.device('cuda') instead of device = torch.device('cuda:{}'.format(args.local_rank)).
Now GPU 0 has 4 processes and each of the others has only one process.
This is my script:
import argparse
import os
import random
import shutil
import time
import warnings
import datetime
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
import Inceptionv3Net as Incepv3
model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
and callable(models.__dict__[name]))
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
parser.add_argument('data', metavar='DIR',
help='path to dataset')
parser.add_argument('-a', '--arch', metavar='ARCH', default='resnet18',
choices=model_names,
help='model architecture: ' +
' | '.join(model_names) +
' (default: resnet18)')
parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('--epochs', default=90, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
parser.add_argument('-b', '--batch-size', default=96, type=int,
metavar='N',
help='mini-batch size (default: 256), this is the total '
'batch size of all GPUs on the current node when '
'using Data Parallel or Distributed Data Parallel')
parser.add_argument('--lr', '--learning-rate', default=0.01, type=float,
metavar='LR', help='initial learning rate', dest='lr')
parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
help='momentum')
parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,
metavar='W', help='weight decay (default: 1e-4)',
dest='weight_decay')
parser.add_argument('-p', '--print-freq', default=10, type=int,
metavar='N', help='print frequency (default: 10)')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest checkpoint (default: none)')
parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
help='evaluate model on validation set')
parser.add_argument('--pretrained', dest='pretrained', action='store_true',
help='use pre-trained model')
parser.add_argument('--world-size', default=-1, type=int,
help='number of nodes for distributed training')
parser.add_argument('--rank', default=-1, type=int,
help='node rank for distributed training')
parser.add_argument('--dist-url', default='tcp://224.66.41.62:23456', type=str,
help='url used to set up distributed training')
parser.add_argument('--backend', default='gloo', type=str,
help='distributed backend')
parser.add_argument('--seed', default=None, type=int,
help='seed for initializing training. ')
parser.add_argument('--gpu', default=None, type=int,
help='GPU id to use.')
parser.add_argument('--multiprocessing-distributed', action='store_true',
help='Use multi-processing distributed training to launch '
'N processes per node, which has N GPUs. This is the '
'fastest way to use PyTorch for either single node or '
'multi node data parallel training')
parser.add_argument("--local_rank", type=int)
parser.add_argument('--no-cuda',action='store_true',default=False,help='disable cuda')
parser.add_argument('--nproc-per-node',default=4,type=int, help='nproc_per_node')
global best_acc1, args
best_acc1 = 0
args = parser.parse_args()
def main():
global best_acc1, args
local_rank = args.local_rank
if args.seed is not None:
random.seed(args.seed)
torch.manual_seed(args.seed)
cudnn.deterministic = True
warnings.warn('You have chosen to seed training. '
'This will turn on the CUDNN deterministic setting, '
'which can slow down your training considerably! '
'You may see unexpected behavior when restarting '
'from checkpoints.')
use_cuda = not args.no_cuda and torch.cuda.is_available()
gpu = "cuda:{}".format(args.local_rank)
device = torch.device(gpu if use_cuda else "cpu")
print("=> using",device)
print("From Node:",torch.distributed.get_rank(),"The Local Rank:",local_rank,'\n')
# create model
if args.pretrained:
print("=> using pre-trained model '{}'".format(args.arch))
model = models.__dict__[args.arch]()
if args.gpu is None:
pre = torch.load('./pretrained/inception_v3_google-1a9a5a14.pth', map_location=lambda storage, loc: storage)
else:
loc = 'cuda:{}'.format(args.gpu)
pre = torch.load('./pretrained/inception_v3_google-1a9a5a14.pth', map_location=loc)
model.load_state_dict(pre)
#model.aux_logits = False
#model = Incepv3.InceptionV3Net(pretrained=args.pretrained)
else:
print("=> creating model '{}'".format(args.arch))
model = models.__dict__[args.arch]()
#model.aux_logits = False
#model = Incepv3.InceptionV3Net(pretrained=args.pretrained)
# optionally resume from a checkpoint
if args.resume:
if os.path.isfile(args.resume):
print("=> loading checkpoint '{}'".format(args.resume))
if args.gpu is None:
checkpoint = torch.load(args.resume)
else:
# Map model to be loaded to specified single gpu.
loc = 'cuda:{}'.format(args.gpu)
checkpoint = torch.load(args.resume, map_location=loc)
args.start_epoch = checkpoint['epoch']
best_acc1 = checkpoint['best_acc1']
if args.gpu is not None:
# best_acc1 may be from a checkpoint from a different GPU
best_acc1 = best_acc1.to(args.gpu)
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
print("=> loaded checkpoint '{}' (epoch {})"
.format(args.resume, checkpoint['epoch']))
else:
print("=> no checkpoint found at '{}'".format(args.resume))
cudnn.benchmark = True
# Load model
model.to(device)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], find_unused_parameters=True)
criterion = nn.CrossEntropyLoss().to(device)
#ptimizer = torch.optim.SGD(model.parameters(), args.lr,momentum=args.momentum,weight_decay=args.weight_decay)
optimizer = torch.optim.RMSprop(model.parameters(), lr=args.lr, alpha=0.9, momentum=args.momentum, eps=1.0, weight_decay=args.weight_decay)
# Data loading code
input_size = 299
traindir = os.path.join(args.data, 'train')
valdir = os.path.join(args.data, 'val')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_dataset = datasets.ImageFolder(
traindir,
transforms.Compose([
transforms.RandomResizedCrop(input_size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
]))
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None),
num_workers=args.workers, pin_memory=True, sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(valdir, transforms.Compose([
transforms.Resize(input_size),
transforms.CenterCrop(input_size),
transforms.ToTensor(),
normalize,
])),
batch_size=args.batch_size, shuffle=False,
num_workers=args.workers, pin_memory=True)
for epoch in range(args.start_epoch, args.epochs):
train_sampler.set_epoch(epoch)
adjust_learning_rate(optimizer, epoch, args)
# train for one epoch
train(train_loader, model, criterion, optimizer, epoch, args, device, True)
# evaluate on validation set
acc1 = validate(val_loader, model, criterion, args, device, False)
# remember best acc@1 and save checkpoint
is_best = acc1 > best_acc1
best_acc1 = max(acc1, best_acc1)
if not args.multiprocessing_distributed or (args.multiprocessing_distributed
and args.rank % args.nproc_per_node == 0):
save_checkpoint({
'epoch': epoch + 1,
'arch': args.arch,
'state_dict': model.state_dict(),
'best_acc1': best_acc1,
'optimizer' : optimizer.state_dict(),
}, is_best)
def train(train_loader, model, criterion, optimizer, epoch, args, device, is_inception=False):
    batch_time = AverageMeter('Time', ':6.3f')
    data_time = AverageMeter('Data', ':6.3f')
    losses = AverageMeter('Loss', ':.4e')
    top1 = AverageMeter('Acc@1', ':6.2f')
    top5 = AverageMeter('Acc@5', ':6.2f')
    progress = ProgressMeter(
        len(train_loader),
        [batch_time, data_time, losses, top1, top5],
        prefix="Epoch: [{}]".format(epoch))

    # switch to train mode
    model.train()

    end = time.time()
    for i, (images, target) in enumerate(train_loader):
        # measure data loading time
        data_time.update(time.time() - end)
        images, target = images.to(device, non_blocking=True), target.to(device, non_blocking=True)

        # compute output
        if is_inception:
            output, aux_output = model(images)
            loss1 = criterion(output, target)
            loss2 = criterion(aux_output, target)
            loss = loss1 + 0.4 * loss2
        else:
            output = model(images)
            loss = criterion(output, target)

        # measure accuracy and record loss
        acc1, acc5 = accuracy(output, target, topk=(1, 5))
        losses.update(loss.item(), images.size(0))
        top1.update(acc1[0], images.size(0))
        top5.update(acc5[0], images.size(0))

        # compute gradient and do SGD step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # measure elapsed time
        batch_time.update(time.time() - end)
        end = time.time()

        if i % args.print_freq == 0:
            progress.display(i)
def validate(val_loader, model, criterion, args, device, is_inception=False):
    batch_time = AverageMeter('Time', ':6.3f')
    losses = AverageMeter('Loss', ':.4e')
    top1 = AverageMeter('Acc@1', ':6.2f')
    top5 = AverageMeter('Acc@5', ':6.2f')
    progress = ProgressMeter(
        len(val_loader),
        [batch_time, losses, top1, top5],
        prefix='Test: ')

    # switch to evaluate mode
    model.eval()

    with torch.no_grad():
        end = time.time()
        for i, (images, target) in enumerate(val_loader):
            images, target = images.to(device, non_blocking=True), target.to(device, non_blocking=True)

            # compute output
            if is_inception:
                output, aux_output = model(images)
                loss1 = criterion(output, target)
                loss2 = criterion(aux_output, target)
                loss = loss1 + 0.4 * loss2
            else:
                output = model(images)
                loss = criterion(output, target)

            # measure accuracy and record loss
            acc1, acc5 = accuracy(output, target, topk=(1, 5))
            losses.update(loss.item(), images.size(0))
            top1.update(acc1[0], images.size(0))
            top5.update(acc5[0], images.size(0))

            # measure elapsed time
            batch_time.update(time.time() - end)
            end = time.time()

            if i % args.print_freq == 0:
                progress.display(i)

        # TODO: this should also be done with the ProgressMeter
        print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
              .format(top1=top1, top5=top5))

    return top1.avg
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
    torch.save(state, filename)
    if is_best:
        shutil.copyfile(filename, 'model_best.pth.tar')
class AverageMeter(object):
    """Computes and stores the average and current value"""
    def __init__(self, name, fmt=':f'):
        self.name = name
        self.fmt = fmt
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

    def __str__(self):
        fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
        return fmtstr.format(**self.__dict__)
class ProgressMeter(object):
    def __init__(self, num_batches, meters, prefix=""):
        self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
        self.meters = meters
        self.prefix = prefix

    def display(self, batch):
        entries = [self.prefix + self.batch_fmtstr.format(batch)]
        entries += [str(meter) for meter in self.meters]
        print('\t'.join(entries))

    def _get_batch_fmtstr(self, num_batches):
        num_digits = len(str(num_batches // 1))
        fmt = '{:' + str(num_digits) + 'd}'
        return '[' + fmt + '/' + fmt.format(num_batches) + ']'
def adjust_learning_rate(optimizer, epoch, args):
    """Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
    lr = args.lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
def accuracy(output, target, topk=(1,)):
    """Computes the accuracy over the k top predictions for the specified values of k"""
    with torch.no_grad():
        maxk = max(topk)
        batch_size = target.size(0)
        _, pred = output.topk(maxk, 1, True, True)
        pred = pred.t()
        correct = pred.eq(target.view(1, -1).expand_as(pred))
        res = []
        for k in topk:
            correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
            res.append(correct_k.mul_(100.0 / batch_size))
        return res
if __name__ == '__main__':
    print('Use {back} as backend.'.format(back=args.backend))
    dist.init_process_group(backend=args.backend, init_method='env://', timeout=datetime.timedelta(seconds=1000))
    main()
st177732 | I am doing 2 forward passes on a resnet and trying to compute the gradients using the outputs from the first forward pass. When using multiple GPUs this works when the model is wrapped through nn.DataParallel but not when wrapped through nn.DistributedDataParallel. Below you can find my code.
import torch
import torchvision
import torch.backends.cudnn as cudnn
import torchvision.models as models
import utils.distributed as dist
def main():
# Get the current device as set for current distributed process.
# Check `launch` function in `utils.distributed` module.
device = torch.cuda.current_device()
# create model
model = models.resnet50().cuda(device)
batch_size = 32
# define loss function (criterion) and optimizer
criterion = torch.nn.CrossEntropyLoss().to(device)
cudnn.benchmark = True
# Wrap model in DDP if using more than one processes.
if dist.get_world_size() > 1:
dist.synchronize()
model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[device], find_unused_parameters=True
)
# Using DataParallel works fine
# model = torch.nn.DataParallel(
# model, device_ids=[device]
# )
ip_1 = torch.rand(batch_size,3,224,224).cuda(device)
op_1 = model(ip_1)
target_1 = torch.zeros(batch_size, dtype=torch.long).cuda(device)
ip_2 = torch.rand(batch_size,3,224,224).cuda(device)
op_2 = model(ip_1)
target_2 = torch.zeros(batch_size, dtype=torch.long).cuda(device)
# loss for the first example
loss = criterion(op_1,target_1)
loss.backward() #----------> Fails here when DDP is used
if __name__ == "__main__":
dist.launch(
main,
num_machines=1,
num_gpus_per_machine=2,
machine_rank=0,
dist_url='tcp://localhost:10001',
dist_backend='nccl'
)
The error I get is
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2048]] is at version 4; expected version 3 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
In case you need to look at distributed.py, here 6 it is. My torch version is '1.7.0a0+8deb4fe'
Thanks in advance for your help |
st177733 | @ramprasaath This looks like a bug, could you please file an issue at https://github.com/pytorch/pytorch 20? |
st177734 | I build torch-1.6.0a0+b31f58d-cp36-cp36m-linux_x86_64.whl on my Tesla V100 machine, when I install the wheel package on my Tesla T4 machine, I met the warning like this:
Tesla T4 with CUDA capability sm_75 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_70.
I wonder how should I build pytorch-gpu for different gpu archs?
Github issue here 5. |
st177735 | Solved by ptrblck in post #2
I’ve answered in the GitHub issue. |
st177736 | To replicate, change only def demo_basic(rank, world_size) in https://pytorch.org/tutorials/intermediate/ddp_tutorial.html 3 to the following:
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
setup(rank, world_size)
# create model and move it to GPU with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=1)
optimizer.zero_grad()
outputs = {}
outputs['0'] = ddp_model(torch.rand(20, 10))
outputs['1'] = ddp_model(torch.rand(20, 10))
outputs['2'] = ddp_model(torch.rand(20, 10))
labels = torch.rand(20, 5).to(rank)
for i in range(3):
print(f"before {i}, rank: {rank}, weight: {ddp_model.module.net1.weight[0][0]}")
if i < 2:
loss_fn(outputs[str(i)], labels).backward(retain_graph=True)
else:
loss_fn(outputs[str(i)], labels).backward()
print(f"after {i}, rank: {rank}, weight: {ddp_model.module.net1.weight[0][0]}, grad: {ddp_model.module.net1.weight.grad[0][0]}")
optimizer.step()
print(f"last, rank: {rank}, weight: {ddp_model.module.net1.weight[0][0]}, grad: {ddp_model.module.net1.weight.grad[0][0]}")
cleanup()
and the output is:
before 0, rank: 0, weight: 0.1450435221195221
before 0, rank: 3, weight: 0.1450435221195221
before 0, rank: 1, weight: 0.1450435221195221
before 0, rank: 2, weight: 0.1450435221195221
after 0, rank: 0, weight: 0.1450435221195221, grad: -0.018003715202212334
before 1, rank: 0, weight: 0.1450435221195221
after 0, rank: 3, weight: 0.1450435221195221, grad: -0.018003715202212334
after 0, rank: 1, weight: 0.1450435221195221, grad: -0.018003715202212334
after 0, rank: 2, weight: 0.1450435221195221, grad: -0.018003715202212334
before 1, rank: 1, weight: 0.1450435221195221
before 1, rank: 3, weight: 0.1450435221195221
before 1, rank: 2, weight: 0.1450435221195221
after 1, rank: 0, weight: 0.1450435221195221, grad: -0.03955963999032974
after 1, rank: 3, weight: 0.1450435221195221, grad: -0.03072114661335945
before 2, rank: 0, weight: 0.1450435221195221
before 2, rank: 3, weight: 0.1450435221195221
after 1, rank: 1, weight: 0.1450435221195221, grad: -0.03775426745414734
before 2, rank: 1, weight: 0.1450435221195221
after 1, rank: 2, weight: 0.1450435221195221, grad: -0.03235533833503723
before 2, rank: 2, weight: 0.1450435221195221
after 2, rank: 0, weight: 0.1450435221195221, grad: -0.06408560276031494
after 2, rank: 3, weight: 0.1450435221195221, grad: -0.04222358390688896
after 2, rank: 1, weight: 0.1450435221195221, grad: -0.056242190301418304
last, rank: 0, weight: 0.20912912487983704, grad: -0.06408560276031494
last, rank: 3, weight: 0.18726710975170135, grad: -0.04222358390688896
last, rank: 1, weight: 0.201285719871521, grad: -0.056242190301418304
after 2, rank: 2, weight: 0.1450435221195221, grad: -0.04413666948676109
last, rank: 2, weight: 0.1891801953315735, grad: -0.04413666948676109
Weights and grads do not seem to be synchronized. |
st177737 | Hey @zzzf,
Does this problem persistent, if you change the flow of fw(1)->fw(2)->fw(3)->bw(1)->bw(2)->bw(3) to fw([1, 2, 3]) -> bw([1, 2, 3])?
BTW, which version of PyTorch are you using? If it’s <=v1.6, I would expect it throws an error here:
github.com
pytorch/pytorch/blob/b31f58de6fa8bbda5353b3c77d9be4914399724d/torch/csrc/distributed/c10d/reducer.cpp#L724 2
// want to start performing reductions on `torch.autograd.backward()`.
void Reducer::prepare_for_backward(
const std::vector<torch::autograd::Variable>& outputs) {
std::lock_guard<std::mutex> lock(mutex_);
std::unordered_set<torch::autograd::Node*> seen;
std::vector<torch::autograd::Node*> queue;
// Check that any prior reduction has finished.
// The variable `require_finalize_` is true until all gradients
// have been computed and reduction of all buckets has been kicked off.
if (require_finalize_) {
TORCH_CHECK(
false,
"Expected to have finished reduction in the prior iteration before ",
"starting a new one. ",
"",
"This error indicates that your module has parameters that were ",
"not used in producing loss. ",
"",
"You can enable unused parameter detection by (1) passing the keyword "
"argument `find_unused_parameters=True` to ", |
st177738 | Hi @mrshenli,
Thanks for your reply. My torch version is 1.6.0 and I receive no warnings/errors.
I’ve tried:
for _ in range(3):
    output = ddp_model(torch.rand(20, 10))
    loss_fn(output, labels).backward()
    optimizer.step()
This works as I expected.
When you said fw([1, 2, 3]) -> bw([1, 2, 3]), do you mean the following?
outputs = (ddp_model(torch.rand(20, 10)), ddp_model(torch.rand(20, 10)), ddp_model(torch.rand(20, 10)))
for i in range(3):
(loss_fn(outputs[0], labels).backward(retain_graph=True), loss_fn(outputs[1], labels).backward(retain_graph=True), loss_fn(outputs[2], labels).backward(retain_graph=True))
optimizer.step()
This still doesn’t synchronize the weights and doesn’t throw any error. |
st177739 | @mrshenli This seems to be a gap in DDP where it doesn’t support running backward twice? I couldn’t find any tests that use retain_graph=True with DDP.
The problem seems to be this line: https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/reducer.cpp#L530 7, which skips any gradient reduction. This variable is set to false after the first backward is done: https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/reducer.cpp#L1152 2 and then never set to True again since prepare_for_backward is not called anymore.
@zzzf Is the workaround you mentioned in your previous reply sufficient for now? |
st177740 | I have the following code which I am trying to parallelize over multiple GPUs in PyTorch:
import numpy as np
import torch
from torch.multiprocessing import Pool
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()
def X_power_func(j):
X_power = X**j
return X_power
if __name__ == '__main__':
with Pool(processes = 2) as p: # Parallelizing over 2 GPUs
results = p.map(X_power_func, range(4))
results
But when I ran the code, I am getting this error:
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "<ipython-input-35-6529ab6dac60>", line 11, in X_power_func
X_power = X**j
RuntimeError: CUDA error: initialization error
"""
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<ipython-input-35-6529ab6dac60> in <module>()
14 if __name__ == '__main__':
15 with Pool(processes = 1) as p:
---> 16 results = p.map(X_power_func, range(8))
17
18 results
1 frames
/usr/lib/python3.6/multiprocessing/pool.py in get(self, timeout)
642 return self._value
643 else:
--> 644 raise self._value
645
646 def _set(self, i, obj):
RuntimeError: CUDA error: initialization error
Where have I gone wrong? Any help would really be appreciated. |
st177741 | Solved by heavyfranz in post #4
Any news? Have you solved the problem? How? I think that the heart of @bapi answer is that you have to manually transfer each input array (a fraction of it or the same, it depends on your problem)
I solved like this:
import time
import torch
from torch.multiprocessing import Pool
torch.multiproces… |
st177742 | Leockl:
import numpy as np
import torch
from torch.multiprocessing import Pool
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()
def X_power_func(j):
X_power = X**j
return X_power
if __name__ == '__main__':
with Pool(processes = 2) as p: # Paralleizing over 2 GPUs
results = p.map(X_power_func, range(8))
results
By default, doing .cuda() will copy your tensor to device cuda:0. I do not see anywhere you have specified the device ids for multiple GPUs. Besides, results outside the scope of main will result in error. CUDA initialization error will be gone by using mp.set_start_method('spawn', force=True) before spawning the process pool, however, that would still not give you a correct implementation for what you are trying to do. |
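For what it's worth, a sketch of that idea applied to the snippet above (the GPU-to-job assignment is made up for illustration): each worker builds its tensor on its own GPU inside the spawned process instead of sharing a CUDA tensor created in the parent.
import torch
import torch.multiprocessing as mp

def X_power_func(args):
    gpu_id, j = args
    # Create the tensor inside the worker, directly on that worker's GPU.
    X = torch.tensor([[1., 3., 2., 3.], [2., 3., 5., 6.], [1., 2., 3., 4.]],
                     dtype=torch.float64, device='cuda:{}'.format(gpu_id))
    return (X ** j).cpu()

if __name__ == '__main__':
    mp.set_start_method('spawn', force=True)
    with mp.Pool(processes=2) as p:  # one process per GPU
        results = p.map(X_power_func, [(0, 0), (0, 1), (1, 2), (1, 3)])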
st177743 | Many thanks @bapi
I added mp.set_start_method('spawn', force=True) into the code below. Would this be right?
import numpy as np
import torch
import torch.multiprocessing as mp
from torch.multiprocessing import Pool
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()
def X_power_func(j):
X_power = X**j
return X_power
if __name__ == '__main__':
mp.set_start_method('spawn', force=True)
with Pool(processes = 1) as p: # Paralleizing over 2 GPUs
results = p.map(X_power_func, range(2))
results
Also, how do I specify the device ids for multiple GPUs for my code?
Sorry if I have too many questions. |
st177744 | Any news? Have you solved the problem? How? I think that the heart of @bapi answer is that you have to manually transfer each input array (a fraction of it or the same, it depends on your problem)
I solved like this:
import time
import torch
from torch.multiprocessing import Pool
torch.multiprocessing.set_start_method('spawn', force=True)
def use_gpu(ind, arr):
return (arr.std() + arr.mean()/(1+ arr.abs())).sum()
def mysenddata(mydata):
return [(ii, mydata[ii].cuda(ii)) for ii in range(4)]
if __name__ == "__main__":
print('create big tensor')
aa = 10*torch.randn(4,10000,10000).double()
print('send data')
b = mysenddata(aa)
for ii in range(10):
pool = Pool(processes=4)
a = time.time()
print('start')
with Pool(processes=4) as p:
#result = pool.starmap(use_gpu, b,)
results = p.starmap(use_gpu, b,)
print('end')
print("cost time :", time.time() - a)
for ii, (rr, bb) in enumerate(zip(results, b)):
print('idx:{}, inshape:{}, indevice:{}, intype:{}, outshape:{}, outdevice:{}, outtype:{}'.format(ii, bb[1].shape, bb[1].get_device(), bb[1].type(), rr.shape, rr.get_device(), rr.type()))
This code seems ok for general gpu processing, but it will not work if the backward method has to be called. Do someone have a simple tutorial on simple multi gpu processing done on multi-gpus? |
st177745 | Hi @heavyfranz. I am afraid I haven’t found a solution for this problem yet, so your solution above helps!
When you say “backward” method, do you mean backpropagation? |
st177746 | How to split the dataset into 10 equal sample sizes in Pytorch?
The goal is to train on each set of samples individually and aggregate their gradient to update the model for the next iteration. |
st177747 | Solved by Ohm in post #8
A simple example to split dataset among n_workers:
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import numpy as np
import time
from worker import worker
from worker_new import worker_new
#the number of workers here:
n_workers = 4
testse… |
st177748 | How we can split 60,000 data(MNIST) into 10 parts in which the first part(data1) contains 6000 data, and the second part(data2) contains 6000 and so on. So how DistributedSampler will be used for this? |
st177749 | Below is the code in python using TensorFlow.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = mnist.load_data()
%worker_X[i] contains the i’th part of training data’s features.
%worker_y[i] contains the i’th part of training data’s label
worker_X = []
worker_y = []
Ohm:
How we can split 60,000 data(MNIST) into 10 parts in which the first part(data1) contains 6000 data, and the second part(data2) contains 6000 and so on. So how DistributedSampler will be used for this?
% dataset size is 60000 . We want to split among 10 workers.
Batch = 60000//10
for i in range(10):
worker_X.append(X_train[i*Batch: Batch+i*Batch])
worker_y.append(y_train[i*Batch: Batch+i*Batch])
I am looking for the same thing in PyTorch. |
st177750 | @ohm The AWS tutorial goes over how to use the DistributedSampler to split your dataset into even parts: https://pytorch.org/tutorials/beginner/aws_distributed_training_tutorial.html#initialize-dataloaders 5. This is not an official PyTorch tutorial, but the “With multiprocessing” section in https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html 1 describes how to use DistributedSampler to do this for MNIST. |
st177751 | Thanks.
‘The AWS tutorial goes over how to use the DistributedSampler to split your dataset into even parts: pytorch.org/tutorials/beginner/aws_distributed_training_tutorial.html#initialize-dataloaders 4’
In this example, I did not see such a thing. The ‘num_workers=workers’ in this example is different from what I am looking for. As I mentioned I am looking for split data among workers. the fixed number of data in each worker! And I need to access them when I call a specific worker.
‘https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html’ I saw it before, again it is not clear how it split data among workers!?! Even it is not clear that it did it or |
st177752 | A simple example to split dataset among n_workers:
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import numpy as np
import time
from worker import worker
from worker_new import worker_new
#the number of workers here:
n_workers = 4
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transforms.ToTensor())
#Split testset, you can access the data from worker 1 with Worker_data[1], and so on.
temp = tuple([len(testset)//n_workers for i in range(n_workers)])
Worker_data = torch.utils.data.random_split(testset, temp) |
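Each entry of Worker_data is itself a Dataset, so (for example) worker 2 can iterate over just its own shard with an ordinary DataLoader — the batch size here is arbitrary:
loader_2 = torch.utils.data.DataLoader(Worker_data[2], batch_size=32, shuffle=True)
for images, labels in loader_2:
    pass  # worker 2 trains only on its len(testset) // n_workers samples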
st177753 | @Ohm My apologies for the previous reply not being clear. The splitting automatically occurs within the DistributedSampler based on the rank that you provide. This piece of code 11 in the DistributedSampler would illustrate how this splitting is done per worker based on the rank. |
st177754 | Hello,
My research is about distributed deep learning, and I am looking for a research developing platform in which new ideas could be implemented. Regarding complex architecture, we should have more access to communications between nodes and GPUs.
I found TF-Replicator from Tensorflow, I saw Pytorch RPC and DDP APIs, and also Ray that is a new designed intesteing platform for emerged AI.
I could found any example and tutorial (Good ones) for TF-Repliicato or Ray. However, Pytorch has some good documents and I feel it gives more control over how we implement more complicated architecture and communication schema.
Therefore It would be great if others can also share their thoughts.
Thank you |
st177755 | Hey @sakh251
I just wanna mention one relevant WIP project that we are working on. We are running internal reviews on the code-level design on it, and should be able to share on Github in the next few weeks. This should help make customizing DDP a lot easier.
github.com/pytorch/pytorch
[RFC] Modularize DistributedDataParallel 1
opened
Apr 21, 2020
mrshenli
Summary
This project aims at decomposing existing DistributedDataParallel (DDP) implementation into multiple smaller pluggable and customizable building blocks. So that applications can...
feature
oncall: distributed
triaged |
st177756 | sakh251:
I could found any example and tutorial (Good ones) for TF-Repliicato or Ray
@sakh251 In terms of TF-Replicator (which is now part of tf.distribute.Strategy) are you referring to dedicated support for Parameter Server based training? |
st177757 | @pritamdamania87
I am looking for some high level api which can control the behaviour of learning. For example sending more data along with model or gradient. Or output of one network should feed to other network like GANs or Autoencoder when networks are on different machine. In this case we should have something like parameter server with more control. Now the question is which of these platforms provide more flexible high level api that is suitable for researchers? Not just use the developed strategies. |
st177758 | I typically see a ddp script being launched by submitting multiple commands (one per process), e.g.:
python -m torch.distributed.launch --nproc_per_node=1 --nnodes=3 --node_rank=0 --master_addr=127.0.0.1 --master_port=12345
python -m torch.distributed.launch --nproc_per_node=1 --nnodes=3 --node_rank=1 --master_addr=10.47.164.34 --master_port=12345
python -m torch.distributed.launch --nproc_per_node=1 --nnodes=3 --node_rank=2 --master_addr=10.47.164.34 --master_port=12345
torch.distributed.init_process_group(backend="nccl", init_method="env://")
However, I am using a cluster-management system and the admin would prefer I submit only command and hence it would have to be the same command.
Are there any examples of maybe using mpiexec (just to submit the command) or anything else - so that master, slaves, etc are created automatically? |
st177759 | Ilia_Karmanov:
However, I am using a cluster-management system and the admin would prefer I submit only command and hence it would have to be the same command.
I’m assuming you mean you’d like to use the same command on all the nodes to spawn the processes. You can use this command on all nodes, but we need to do something to handle the rank:
python -m torch.distributed.launch --nproc_per_node=1 --nnodes=3 --node_rank=<rank> --master_addr=10.47.164.34 --master_port=12345
You probably need some way of passing the appropriate unique rank in. Does your cluster-management system allow for maybe using environment variables which are different per nodes? If so, you can pass in the node_rank via an environment variable.
Another option might be using TorchElastic (which is a fault tolerant wrapper around DDP): https://pytorch.org/elastic/0.2.1/quickstart.html 4. TorchElastic figures out the rank automatically, so you can use the same command on all nodes. |
st177760 | Hi guys,
Is there any tutorial which shows how can we use distributed model training with SGE (Sun Grid Engine). In general I’m wondering how multiple nodes can communicate with each other in multiple node setup?
Cheers, |
st177761 | I’m not familiar with Sun Grid Engine, but if multiple nodes in the system can talk to each other over TCP, you can follow this tutorial: https://pytorch.org/tutorials/intermediate/dist_tuto.html 17. You probably want to use the TCP initialization method as described here: https://pytorch.org/tutorials/intermediate/dist_tuto.html#initialization-methods 6. |
st177762 | Hi,
in the DDP tutorial (https://pytorch.org/tutorials/intermediate/ddp_tutorial.html 5) the following code is shown to split a model onto two GPUs:
class ToyMpModel(nn.Module):
def __init__(self, dev0, dev1):
super(ToyMpModel, self).__init__()
self.dev0 = dev0
self.dev1 = dev1
self.net1 = torch.nn.Linear(10, 10).to(dev0)
self.relu = torch.nn.ReLU()
self.net2 = torch.nn.Linear(10, 5).to(dev1)
def forward(self, x):
x = x.to(self.dev0)
x = self.relu(self.net1(x))
x = x.to(self.dev1)
return self.net2(x)
How can I split a pretrained model (DeeplabV3Resnet101) onto different GPUs?
def getDeepLabV3Resnet101Pretrained(num_of_classes):
model = models.segmentation.deeplabv3_resnet101(pretrained=1)
# Change number of output classes
model.classifier[4] = nn.Conv2d(
in_channels=256,
out_channels=num_of_classes,
kernel_size=1,
stride=1
)
# And now how to put different model parts on different GPUs?
# Does model.children() help?
How would you determine where to split?
I would try to calculate the number of parameters for every model layer and then make more or less equal splits.
Would this be a good way?
Thanks! |
st177763 | Solved by pritamdamania87 in post #4
Do I also need to change this or does this “.to” work with nn.sequential (no separate forward function) as well?
“.to” would work on nn.sequential, although you need to modify the forward function since once you have completed execution for the module on GPU0, the output will be on GPU0. Now sinc… |
st177764 | I think you will need to manually place different layers on different GPUs. After that you will need to configure your forward function (similar to the ToyMpModel example you referenced), where you must send the the input batch to the first GPU, get the activations after passing through all of the layers on the first GPU, then send those activations to the next GPU, and so on until the last layer on the last GPU.
We currently don’t provide an automated way of splitting the model optimally across machines, but the approach you mentioned should work. In essence, I would compute the number of parameters in the model and try to create equal splits so that each GPU gets roughly similar number of parameters. |
st177765 | Thanks for your answer!
This basically means there is no easy way and I would need to modify a copy of the following code, right?
github.com
pytorch/vision/blob/master/torchvision/models/segmentation/deeplabv3.py 9
import torch
from torch import nn
from torch.nn import functional as F
from ._utils import _SimpleSegmentationModel
__all__ = ["DeepLabV3"]
class DeepLabV3(_SimpleSegmentationModel):
"""
Implements DeepLabV3 model from
`"Rethinking Atrous Convolution for Semantic Image Segmentation"
<https://arxiv.org/abs/1706.05587>`_.
Arguments:
backbone (nn.Module): the network used to compute the features for the model.
The backbone should return an OrderedDict[Tensor], with the key being
"out" for the last feature map used, and "aux" if an auxiliary classifier
This file has been truncated. show original
This uses nn.sequential.
Do I also need to change this or does this “.to” work with nn.sequential (no separate forward function) as well?
Thanks! |
st177766 | Do I also need to change this or does this “.to” work with nn.sequential (no separate forward function) as well?
“.to” would work on nn.sequential, although you need to modify the forward function since once you have completed execution for the module on GPU0, the output will be on GPU0. Now since the other module you want to execute is on GPU1, you need to move the output from GPU0 to GPU1 manually (using “.to”) and then you need to execute the module on GPU1. |
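A rough sketch of that idea for deeplabv3_resnet101 (not an official recipe — the split point and device ids are just an example): keep the backbone on one GPU, the classifier head on another, and move the activations between them in forward():
import torch
import torch.nn.functional as F
import torchvision.models as models

class SplitDeepLab(torch.nn.Module):
    def __init__(self, dev0, dev1):
        super().__init__()
        m = models.segmentation.deeplabv3_resnet101(pretrained=True)
        self.dev0, self.dev1 = dev0, dev1
        self.backbone = m.backbone.to(dev0)       # ResNet-101 feature extractor on GPU 0
        self.classifier = m.classifier.to(dev1)   # DeepLab head on GPU 1

    def forward(self, x):
        feats = self.backbone(x.to(self.dev0))['out']   # backbone returns an OrderedDict of feature maps
        out = self.classifier(feats.to(self.dev1))      # hop to GPU 1 before running the head
        return F.interpolate(out, size=x.shape[-2:], mode='bilinear', align_corners=False)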
st177767 | As I understood, the Tutorial for Parameter server based on the RPC framework is a special implementation based on different assumptions.
1- The data should be sent to the parameter server (for a large dataset is it possible???)
2- it Async
3- distributed auto grad has been used
am I correct?
I am wondering is there any way to implement a sync parameter server with the RPC framework without moving data to the parameter server? I mean just use local grad and for example, model averaging on parameter server. any exist? |