st178668 | Thanks for your reply. Now I can run my training program and the GPU works, but nothing prints in the terminal. Why? |
st178669 | Hello,
I am in a very similar situation where I have a single node and 8 GPUs. I used the following resource as a guideline for distributed parallel training: https://github.com/dnddnjs/pytorch-multigpu
I was able to run this example fine, but when I try to load the model, I get the following error:
Expected tensor for argument #1 ‘input’ to have the same device as tensor for argument #2 ‘weight’; but device 4 does not equal 0 (while checking arguments for cudnn_convolution)
Could this be a problem in how I am loading the training data? |
st178670 | Hi, it means that your input and model weights are not on the same device, for example your input is on GPU-0 while your model weights are on GPU-1. Note that the input and weights must be on the same device. I think it may be caused by the way you load the model. Can you show your loading code or give an example? |
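As a quick illustration of the usual fix (a generic sketch, not from the thread): query the device the replica's parameters actually live on and move the input there before the forward pass.

device = next(model.parameters()).device   # the GPU this replica actually lives on
inputs = inputs.to(device)                 # keep input and weights on the same device
output = model(inputs)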
st178671 | Thanks for the reply. I believe I found the error. The original code was written for a single GPU and unfortunately there were multiple places where cuda:0 was hardcoded.
However, I do have a followup question. From all the examples I have seen so far, the dataloader is associated with DistributedSampler: https://pytorch.org/docs/stable/_modules/torch/utils/data/distributed.html
In many cases the dataloader loads files in a directory. In the case of using DDP, the DistributedSampler would be used to allocate files to each GPU such that each GPU gets a unique subset of samples from the total dataset (I am assuming that the total number of data items is evenly divisible by the number of GPUs).
In my case I am loading 3D data and then take patches of this data, which serves as the training input of the network. So one data file corresponds to more than one training input.
What I do currently is that I load the data by loading the files in a folder. Then I use a custom sampler that loads the patch indices and has an iterator that passes patches when called. I feed this sampler to a Dataloader, where I pass in the whole data, the sampler, and batch size. This works fine for one GPU.
I am now moving on to converting my code to DDP. I could put the DistributedSampler after my custom sampler, but I worry about the idea of multiple GPUs accessing the same file (again the input is a patch and different patches can come from the same file). Am I correct to say that this would be a problem?
Another approach could be to put the DistributedSampler before my current sampler. But I am a bit unsure how to hook up this DistributedSampler to my existing code.
I suppose yet another method could be to bypass using torch.utils.data.distributed.DistributedSampler and perhaps instead have my initial Dataset have a getitem that distributes the files among the GPUs in a manner similar to the DistributedSampler, and then keep the rest of my hooks the same. Or alternatively, in the main loop I could have the logic for handling the distribution of files and pass this into the spawned process.
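For illustration, a minimal sketch of that last idea: let the Dataset index individual patches so that a stock DistributedSampler splits patch indices (rather than files) across processes. PatchDataset, load_volume, and extract_patch are hypothetical names standing in for the poster's own loading code.

import torch
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler

class PatchDataset(Dataset):
    def __init__(self, file_list, patches_per_file):
        self.file_list = file_list
        self.patches_per_file = patches_per_file

    def __len__(self):
        return len(self.file_list) * self.patches_per_file

    def __getitem__(self, idx):
        file_idx, patch_idx = divmod(idx, self.patches_per_file)
        volume = load_volume(self.file_list[file_idx])   # hypothetical 3D loader
        return extract_patch(volume, patch_idx)          # hypothetical patch extraction

dataset = PatchDataset(file_list, patches_per_file=8)
sampler = DistributedSampler(dataset)                    # shards patch indices per rank
loader = DataLoader(dataset, batch_size=16, sampler=sampler)

Note that with index-level sharding two processes may still read from the same file (read-only), which is usually fine; pre-splitting files per rank is an alternative if that is a concern.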
Would one approach be better than others? Or should I be using another approach altogether? Does the DDP work properly if the code does not use the DistributedSampler? |
st178672 | @mrshenli
Can you please define the terms: 1) node, 2) process in the context you are using them?
If I want to train my model on 4 GPUs, do you call it 4 processes? or 1 process?
Does init_method correspond to the address of my PC or to the GPU I’m accessing on a cluster?
In this tutorial, what were you referring to as machine?
@mingyang94
Can you please explain how you arrived at:
mingyang94:
init_method |
st178673 | Hey @spabho
Can you please define the terms: 1) node, 2) process in the context you are using them?
We usually use one node/machine/server to represent one physical computer which can be equipped with multiple GPUs.
One process is in the context of process/thread.
If I want to train my model on 4 GPUs, do you call it 4 processes? or 1 process?
In this case, using 4 processes with DDP should give you the best performance.
Does init_method correspond to the address of my PC or to the GPU I’m accessing on a cluster?
It corresponds to the address of your PC. It is giving some information for the 4 DDP processes to perform rendezvous.
In this tutorial, what were you referring to as machine?
Machines should always refer to node/server/machine/computer.
To be clear, there are three concepts involved in DDP training:
Node/Machine/Server: a physical computer that can contain multiple GPUs. It can also talk to other nodes through the network.
GPU: just one GPU.
Process: each process should run its own DDP instance. Usually each DDP instance should exclusively operate on one GPU (if your model can fit in one GPU), and DDP instances will talk to each other through the network.
This example might serve better as a starting point for DDP. |
st178674 | Hi @mrshenli,
I was looking at the tutorial you mentioned.
In the example, it says that
This example uses a torch.nn.Linear as the local model, wraps it with DDP, and then runs one forward pass, one backward pass, and an optimizer step on the DDP model. After that, parameters on the local model will be updated, and all models on different processes should be exactly the same.
I’m just wondering what it means by local model, and what’s the difference between the local model and the models on different processes.
Thanks! |
st178675 | rzhang63:
I’m just wondering what it means by local model, and what’s the difference between the local model and the models on different processes.
Hey @rzhang63, each process runs its own DDP model, which wraps the local model. DDP does not own any parameters (but it will create buffers/buckets for communication). So from the perspective of parameters, they are the same.
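As a rough illustration (assuming a process group is already initialized and rank is the local GPU id), the DDP wrapper exposes the local model as .module and shares its parameters rather than copying them:

import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

local_model = nn.Linear(10, 10).to(rank)
ddp_model = DDP(local_model, device_ids=[rank])

assert ddp_model.module is local_model                                  # same underlying module
assert next(ddp_model.parameters()) is next(local_model.parameters())   # parameters shared, not copied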
The reason for calling them local model vs DDP model is that, local model by itself does not perform communication across processes. DDP takes care of communication to make sure that different local models on difference processes are always in sync, as if all processes are operating on the same global module. |
st178676 | Hi @mrshenli,
Thank you for your reply. I was running the example code in the tutorial but I got the following error:
AttributeError: Can't get attribute 'demo_basic' on <module '__main__' (built-in)>
(screenshot of the full traceback attached)
Do you know why this happened? |
st178677 | rzhang63:
Thank you for your reply. I was running the example code in the tutorial but I got the following error:
The link provided above points to the DDP example, but demo_basic is one function from https://pytorch.org/tutorials/intermediate/ddp_tutorial.html. Are you mixing these two? |
st178678 | The code I used is
import os
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'

    # initialize the process group
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

def demo_basic(rank, world_size):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)

    # create model and move it to GPU with id rank
    model = nn.Linear(10, 10).to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10))
    labels = torch.randn(20, 5).to(rank)
    loss_fn(outputs, labels).backward()
    optimizer.step()

    cleanup()

def main():
    world_size = 2
    mp.spawn(demo_basic,
             args=(world_size,),
             nprocs=world_size,
             join=True)

if __name__ == "__main__":
    main() |
st178679 | Hey @rzhang63 sorry that I misread your log picture. Are you using Windows? Currently PT Distributed does not support Windows yet. We have a poll here: https://github.com/pytorch/pytorch/issues/37068
If running on Linux, it should work with a minor fix.
Change
labels = torch.randn(20, 5).to(rank)
to
labels = torch.randn(20, 10).to(rank) |
st178680 | I just switched to Linux and changed the labels to labels = torch.randn(20, 10).to(rank), but I still got the following error:
(screenshot of the error attached) |
st178681 | This seems to be a multiprocessing pickle issue. How did you launch it. Is it sth like python test.py from command line or through notebook?
And can you confirm you see the same error even if you remove all torch.distributed code? Say make demo_basic into the following and remove other functions?
def demo_basic(rank, world_size):
    pass |
st178682 | This looks relevant to the error you are seeing: https://github.com/ipython/ipython/issues/10894 9 |
st178683 | When I used DistributedDataParallel to replace DataParallel, the result on the validation set became very poor, as in the case of overfitting. I used 4 GPUs, one process per GPU, keeping the learning rate and batch size unchanged. The following is all the code related to DDP:
dist.init_process_group(backend='nccl')
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)

train_sampler = torch.utils.data.distributed.DistributedSampler(train_set)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=args.batch_size,
    num_workers=args.workers, sampler=train_sampler, pin_memory=True, shuffle=(train_sampler is None))
val_sampler = torch.utils.data.distributed.DistributedSampler(val_set)
val_loader = torch.utils.data.DataLoader(
    val_set, batch_size=args.batch_size,
    num_workers=args.workers, pin_memory=True, shuffle=False, sampler=val_sampler)

model = models.__dict__[args.arch](network_data).to(device)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])
cudnn.benchmark = True

for epoch in tqdm(range(args.start_epoch, args.epochs)):
    # train for one epoch
    train_sampler.set_epoch(epoch)
    train_loss = train(......)
    dist.reduce(train_loss, 0, op=dist.ReduceOp.SUM)
    print(train_loss / nb_gpus)
    test_loss = validate(.....)
    dist.reduce(test_loss, 0, op=dist.ReduceOp.SUM)
    print(test_loss / nb_gpus)
The blue curve is the result on the validation set. |
st178684 | Solved by mrshenli in post #2 |
st178685 | Hey @111344
If each DDP (DistributedDataParallel) process is using the same batch size as you passed to DataParallel, then I think you need to divide the reduced loss by world_size. Otherwise, you are summing together losses from world_size batches.
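A minimal sketch of that division (assuming import torch.distributed as dist, an initialized default process group, and train_loss being a tensor on the current device):

world_size = dist.get_world_size()
dist.reduce(train_loss, 0, op=dist.ReduceOp.SUM)     # rank 0 now holds the sum of per-process losses
if dist.get_rank() == 0:
    print((train_loss / world_size).item())          # report the average, not the sum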
Another thing is that batch size and learning rate might need to change when switched to DDP. Check out the discussions below:
Should we split batch_size according to ngpu_per_node when DistributedDataparallel
Is average the correct way for the gradient in DistributedDataParallel with multi nodes?
And this briefly explains how DDP works: https://pytorch.org/docs/master/notes/ddp.html |
st178686 | Thanks for your answer, it helped me a lot.
One conclusion I got from these materials is that I should set torch.utils.data.DataLoader(batch_size=args.batch_size/world_size)
while the lr stays at 1x lr.
Is this correct? |
st178687 | 111344:
torch.utils.data.DataLoader(batch_size=args.batch_size/world_size)
Yes, this should let the DDP gang collectively process the same number of samples compared to the single-process case. But it may or may not stay mathematically equivalent due to the loss function. DDP takes the average of grads across processes. So if the loss function calculates the sum of the loss over all samples, or if (loss(x) + loss(y)) / 2 != loss([x, y]) / 2, it won't be mathematically equivalent. Hence, it might take some effort to tune the lr and batch size when using DDP. |
st178688 | Hey, sorry for the late reply.
My loss function is defined as follows:
loss = torch.norm(target_flow - input_flow, 2, 1)/batch_size
In https://discuss.pytorch.org/t/is-average-the-correct-way-for-the-gradient-in-distributeddataparallel-with-multi-nodes/34260
there are some discussions on how to calculate the loss. It seems that DDP will automatically average the loss over the batch size, so do I need to manually average the loss? |
st178689 | 111344:
there are some discussions on how to calculate the loss. It seems that DDP will automatically average the loss over the batch size, so do I need to manually average the loss?
No, you don’t need to manually average the loss. When using DDP, losses are local to every process, and DDP will automatically average gradients for all parameters using AllReduce communication.
My loss function is defined as follows:
loss = torch.norm(target_flow - input_flow, 2, 1)/batch_size
The batch_size here is the per-process input batch size, right? |
st178690 | Yes,it’s per-process batch_size.
In fact, I think the problem is basically solved after dividing Batchsize by ngpus (although performance is still slightly behind DP, but this should be a tuning problem)
Thank you for your help. Best wishes! |
st178691 | The ImageNet example has a DistributedSampler for the training loader, but not the validation loader. This would appear to have every rank processing the entire data for the validation set. Is this necessary, or could a DistributedSampler be used for the validation loader also, to apply the multiple nodes to processing the validation set? |
st178692 | Hi @churchillmic,
I have the same query. Were you able to find the answer for this?
Thanks
Anil |
st178693 | I found a couple of examples where a DistributedSampler was used for the validation or test set. I'm still not sure why the official ImageNet example doesn't use it; it still seems wasteful to me. Here are a few of the examples:
github.com/huggingface/pytorch-pretrained-BERT/blob/c9fd3505678d581388fb44ba1d79ac41e8fb28a4/examples/extract_features.py
github.com/Jongchan/Pytorch-Horovod-Examples/blob/master/examples/cifar100/main_horovod.py |
st178694 | It is not necessary to have every rank process the entire validation set. You can use a distributed sampler and average the errors afterwards to achieve the same result. |
st178695 | Actually, you cannot use the DDP sampler to get exact validation results. See DistributedSampler; note that the dataset has extra samples added to make it evenly divisible. Therefore, if your dataset is very small, the final result may be different. The official implementation is right. |
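For reference, a rough sketch of distributed validation with metric averaging, accepting the caveat above (DistributedSampler may duplicate a few samples to even out the shards); model, loss_fn, val_set, and device are assumed to exist:

val_sampler = DistributedSampler(val_set, shuffle=False)
val_loader = DataLoader(val_set, batch_size=64, sampler=val_sampler)

total_loss = torch.zeros(1, device=device)
n_samples = torch.zeros(1, device=device)
with torch.no_grad():
    for x, y in val_loader:
        out = model(x.to(device))
        total_loss += loss_fn(out, y.to(device)) * x.size(0)   # assumes loss_fn returns a per-batch mean
        n_samples += x.size(0)

dist.all_reduce(total_loss)   # default op is SUM across ranks
dist.all_reduce(n_samples)
mean_loss = (total_loss / n_samples).item()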
st178696 | Hello,
I am trying to deploy a crash-resilient distributed setup in PyTorch. Assume that the node of rank 0 controls the learning process. Now, if this node fails/crashes, I want node 1 to continue the job of node 0 and somehow notify the other nodes of the change. One way to do this is to assign rank 0 to node 1 so that all nodes can communicate as usual with node 0 (which was node 1 before the change). Is there any way to do this?
Thanks |
st178697 | Solved by mrshenli in post #2 |
st178698 | Hey @aguirguis, if you are looking for elastic training for distributed data parallel, torchelastic is built for this purpose. It will conduct re-rendezvous on living nodes when failure occurs. |
st178699 | thanks @mrshenli. @aguirguis, here's the quickstart guide for TorchElastic (http://pytorch.org/elastic/0.2.0rc0/quickstart.html). If you are familiar with torch.distributed.launch things should look familiar. We've also written a kubernetes controller in collaboration with EKS, which you can check out here: http://pytorch.org/elastic/0.2.0rc0/kubernetes.html |
st178700 | Thank you @mrshenli and @Kiuk_Chung for your responses; they are really helpful.
I have two follow-up questions:
According to this design, I have to run an etcd server, right? This is still a single point of failure. Is there any way to circumvent that?
Is there any way to force some specific values for ranks to some specific nodes?
Thanks a lot. |
st178701 | You can run a fault tolerant etcd cluster by having etcd backed by multiple machines. You can find more information about it under the “Deployments” section in the FAQ page: https://etcd.io/docs/v3.4.0/faq/ 2
Short answer is no. There are two ranks in elastic: 1. node_rank (its called GROUP_RANK), and 2. worker_rank (RANK). We don’t allow the ranks to be manually overridden because “sticky” ranks and elasticity do not play well together. Could you describe the use-case you have in mind? This way I can brainstorm a way to make things work without hardcoding ranks to specific nodes. |
st178702 | Thanks @Kiuk_Chung for your answers. I will check the etcd FAQ.
I'm thinking of a parameter server deployment with crash tolerance for the central server. I want to deploy, let's say, 2 servers so that if one crashes, the other one takes over. Yet, I want this to be transparent to the workers, i.e., they still send their gradients normally to the process with rank 0.
Is there an easy and cheap way to do this currently?
Thanks! |
st178703 | You could do this using torch rpc. Say your parameter servers follow some naming convention like “ps:##” and you know the total ps replicas. Then you could round robin or fall back to the surviving parameter servers if an rpc call fails.
How do you plan on keeping the data in the parameter servers replicated so that you are tolerant to ps failures? |
st178704 | Yes, I was actually thinking of using rpc. My only concern is its performance compared to the other collective primitives (e.g., gather, broadcast, etc.). Can you comment on this performance comparison?
The problem of consistency among PSes could be solved using checkpoints. Assuming there is one primary PS, I'd let only this PS update the model and periodically checkpoint it to some persistent database. If it crashes, the backup PS first loads the latest checkpoint and then continues training normally. How does that sound? |
st178705 | Hello,
In this nice tutorial, it is described how to implement a parameter server (PS) deployment with RPC.
What confuses me is that it is not clear where does the execution (forward and backward passes) happen in this example…is it on the trainer machines or the PS machine?
It makes sense that these computations happen on the trainers yet, the forward pass is defined as a part of the PS code, which is called remotely by the trainer! would you please clarify this part?
Also, as far as I understand, usually, the PS collects the trainers’ gradients and aggregates them (for instance by averaging) to update the global model. Does this aggregation part exist in the tutorial? if yes, would you please explain where? if no, then what’s the strategy to incorporate all trainers’ work in the global model.
A follow-up question on that last point: how can I apply more sophisticated aggregation functions on the gradients (let’s say I want to take median instead of mean on the PS machine)…how can I do this?
Thank you very much! |
st178706 | Solved by mrshenli in post #2 |
st178707 | Hey @aguirguis
What confuses me is that it is not clear where does the execution (forward and backward passes) happen in this example…is it on the trainer machines or the PS machine?
The executions scattered on both trainer and PS:
Forward pass: input (on trainer) -> parameter (on PS) -> output (on trainer) -> loss (on trainer)
So, for each trainer, there are three pieces of autograd graphs scattered on the trainer and the PS, and those are connected by RPC.
It makes sense that these computations happen on the trainers yet, the forward pass is defined as a part of the PS code, which is called remotely by the trainer! would you please clarify this part?
Yes. Both TrainerNet and ParameterServer have a forward function as they are both nn.Module subclasses. TrainerNet forward calls ParameterServer forward. An analogy would be if you have nn.Sequential(nn.Linear(10, 10)), both Sequential and Linear implement their own forward and Sequential's forward calls Linear's forward. This is also why the autograd graph scatters on both the trainer and the parameter server.
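As a rough sketch of that call pattern (simplified from the tutorial; the helper name and exact wiring are paraphrased, not copied):

def call_method(method, rref, *args, **kwargs):
    # executes on the owner of rref, i.e. on the parameter server
    return method(rref.local_value(), *args, **kwargs)

class TrainerNet(nn.Module):
    def __init__(self, ps_rref):
        super().__init__()
        self.ps_rref = ps_rref      # RRef pointing at the ParameterServer module

    def forward(self, x):
        # the trainer's forward is essentially one RPC into the PS's forward
        return rpc.rpc_sync(self.ps_rref.owner(), call_method,
                            args=(ParameterServer.forward, self.ps_rref, x))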
Also, as far as I understand, usually, the PS collects the trainers’ gradients and aggregates them (for instance by averaging) to update the global model. Does this aggregation part exist in the tutorial?
No, the aggregation is not part of the tutorial. Instead, it is doing something similar to Hogwild! training. The aggregation method you mentioned is one type of synchronous training, where all trainers need to wait for that aggregation to finish before they can proceed. This works, and it is also possible to implement this synchronous aggregation using torch.distributed.rpc, but it won't scale to large workloads on large clusters, as any straggler will kill the performance.
if no, then what’s the strategy to incorporate all trainers’ work in the global model.
One option would be letting the PS send parameters to trainers, and then get back gradients. Then, the PS aggregates the grads as you mentioned above, and updates its parameters. The PS can keep doing this until the model converges. This is similar to this RL tutorial where the agent tells observers what to do and the agent owns the policy model.
A follow-up question on that last point: how can I apply more sophisticated aggregation functions on the gradients (let’s say I want to take median instead of mean on the PS machine)…how can I do this?
If adopting the RL tutorial above, the PS can collect the gradients from the different trainers in a list and compute the median accordingly.
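For instance, a minimal sketch of an element-wise median over per-trainer gradients; grads_per_trainer is a hypothetical list with one entry per trainer, each entry being a list of per-parameter tensors already gathered on the PS:

import torch

def aggregate_median(grads_per_trainer):
    aggregated = []
    for per_param in zip(*grads_per_trainer):        # iterate parameter by parameter
        stacked = torch.stack(per_param, dim=0)      # shape: (num_trainers, *param_shape)
        aggregated.append(stacked.median(dim=0).values)
    return aggregated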
cc the author of the PS tutorial @rvarm1 |
st178708 | Thanks, @mrshenli for your detailed answers. I have a few follow-up questions:
mrshenli:
input (on trainer) -> parameter (on PS) -> output (on trainer)
Does this mean the input is sent to the PS first to propagate through the network parameters in the PS and then at the end, the output is sent back to the trainer?
What confuses me here is that it seems that the forward function of TrainerNet is just dummy and all what it does is calling that of ParameterServer. As far as I understand, in the PS architecture, data never leaves the trainer machine and that the whole gradient computation process should be done entirely locally on the trainer machine.
If you can describe all the communication that happens in one training iteration, that would be great. For instance, assume that we have one PS machine and two trainer machines. PS has the model and each trainer has a few data samples. What is sent to whom?
Hogwild! assumes shared memory so, the setup is inherently different from that of the PS, right? I cannot entirely digest how/why do you blend these two setups. Would you please clarify?
Thanks a lot. |
st178709 | aguirguis:
Does this mean the input is sent to the PS first to propagate through the network parameters in the PS and then at the end, the output is sent back to the trainer?
Yes.
What confuses me here is that it seems that the forward function of TrainerNet is just dummy and all what it does is calling that of ParameterServer .
Yes, you are right. In this specific case, as x does not require grad, there is no need to link it to the distributed autograd graph. So there are only two pieces of autograd graph on PS and the trainer. (I was wrong when saying there three pieces in previous comments.)
As far as I understand, in the PS architecture, data never leaves the trainer machine and that the whole gradient computation process should be done entirely locally on the trainer machine.
There are different ways to implement this. Imagine there is a super large embedding table and the trainer only holds several lookup indices in each iteration. One solution is to do the training all on the trainer, but then the application will need to implement update functions that convert indices and gradients from the trainer back to embedding table gradients. Another option is to let the autograd engine take care of this, so that simply calling loss.backward() on the trainer is sufficient to update the embedding table on the ps.
If you can describe all the communication that happens in one training iteration, that would be great. For instance, assume that we have one PS machine and two trainer machines. PS has the model and each trainer has a few data samples. What is sent to whom?
Sure. Since trainers are independent in that tutorial IIUC, I will only describe what happens between a PS-trainer pair.
In forward pass, there are two comms: 1) trainer -> ps to send input sample 2) ps -> trainer to send the output
In the backward pass, there is one comm: trainer -> ps to send the gradients for the model outputs, which will then trigger local autograd engine on the ps to compute gradients on the model.
In the optimizer step pass, there is one comm: trainer -> ps tell the local optimizer on ps to update model parameters. (It is possible to pack this into the comm during the backward pass using hooks.)
Since there are two trainers accessing the same model, instead of storing the grads in param.grad, the ps will put those grads in dedicated contexts associated with each distributed autograd context, and those grads will later be consumed by the distributed optimizer.
More details about dist autograd can be found here: https://pytorch.org/docs/stable/rpc/distributed_autograd.html#distributed-autograd-design
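For reference, a rough sketch of the trainer-side loop with distributed autograd and the distributed optimizer; this follows the current API (exact signatures may differ in older releases), and net, loss_fn, dist_optim, and data_loader are assumed to exist:

import torch.distributed.autograd as dist_autograd

for data, target in data_loader:
    with dist_autograd.context() as context_id:
        output = net(data)                          # forward hops trainer -> PS -> trainer
        loss = loss_fn(output, target)
        dist_autograd.backward(context_id, [loss])  # grads land in this context on the PS
        dist_optim.step(context_id)                 # DistributedOptimizer consumes them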
Hogwild! assumes shared memory so, the setup is inherently different from that of the PS, right? I cannot entirely digest how/why do you blend these two setups. Would you please clarify?
Right, the original paper was mainly focusing on shm. But the lock-free spirit can be applied to distributed training as well. This is especially useful for training on a large dataset with large embedding tables. |
st178710 | Thanks @mrshenli for your detailed answers. Now, everything is clear to me.
Probably a better design for my use case is to use local optimizer and autograd as in the RL tutorial you referred to before. |
st178711 | Hi all,
I am using RPC to send and receive data with multiple CPUs.
Currently, I use “gloo” backend.
My code is:
rpc.rpc_async(my_target, add_outlinks, args=(arr_send[i],source))
I have nearly 1 million objects.
For each object, I send to (size-1) workers to run the function add_outlinks.
In total, with 1 million objects, we have to send roughly 1 million * (size-1) messages.
arr_send[i] contains from 10 to 1,000,000 numbers.
I ran the same function with UPC++, and it takes around several minutes. However, with torch RPC, I set the timeout to 10000s (~166 minutes), and the program is stopped by the timeout.
Could you tell me the best way to send data with PyTorch?
Thanks, |
st178712 | Is it possible to consolidate multiple arr_send[i] into one RPC? Is there any reason that they need to be sent using dedicated RPCs?
Curious:
After 10000s, how many objects are processed?
Is distributed autograd/optimizer also used in this case?
Are any of those CPUs locate on the same machine? (so that shm can be helpful) |
st178713 | I use as
def add_outlinks(arr, source):
    for dest in arr:
        if int(dest) in _local_dict:
            _local_dict[dest].in_links.append(int(source))

rpc.init_rpc(my_name, rank=rank, world_size=size,
             rpc_backend_options=rpc.ProcessGroupRpcBackendOptions(
                 num_send_recv_threads=16,
                 rpc_timeout=datetime.timedelta(seconds=10000)))  # initial_rpc

# CALL rpc TO OTHER RANKS
if rank == 0:
    print("add-link...")
try:
    array_rpc = list(range(0, size))
    count = 0
    for it in _local_dict:
        count = count + 1
        arr_send = []
        for i in range(0, size):
            arr_send.append([])
        u = _local_dict[it]
        source = u.vertexId
        for i in u.links:
            arr_send[int(i) % size].append(int(i))
        for i in array_rpc:
            my_target = "worker" + str(i)
            if len(arr_send[i]) > 0:
                rpc.rpc_async(my_target, add_outlinks, args=(arr_send[i], source))
except:
    print("rank ", rank, " run ", count, "/", len(_local_dict))

rpc.api._wait_all_workers()
print("shutdown.... rpc... ", rank)
rpc.api._wait_all_workers()
rpc.shutdown()
arr_send[i] will send to rank i
For elements in _local_dict, we can run parallel.
1. After 10000s, how many objects are processed?
–> The outputs are as below. worker0 does not appear in the output. I tried to print the “count” value, but there is no output for the “count” variable.
....
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:45462
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:44970
....
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:19635
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:27553
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:44970
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:47501: Connection reset by peer
Traceback (most recent call last):
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
File "pagerank.py", line 380, in init_process
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:44931: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 77, in wrapper
return func(*args, **kwargs)
...
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:378] writev [2001:700:4a01:10::38]:22942: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 240, in shutdown
_wait_all_workers()
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 77, in wrapper
return func(*args, **kwargs)
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:5848: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:50095: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:29331
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:57022: Connection reset by peer
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 165, in _wait_all_workers
args=(sequence_id, self_worker_name,),
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:15236
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:2720
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:23214
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:38547
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:50607
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 77, in wrapper
return func(*args, **kwargs)
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
...
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:12173
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 554, in rpc_sync
return fut.wait()
RuntimeError: Encountered exception in ProcessGroupAgent::enqueueSend: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:57022: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:3715
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:2693
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
During handling of the above exception, another exception occurred:
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:9877
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:3715
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
Traceback (most recent call last):
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:378] writev [2001:700:4a01:10::38]:22942: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:36780: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:47501: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:44931: Connection reset by peer
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:3213: Connection reset by peer
Traceback (most recent call last):
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:13723
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "pagerank.py", line 380, in init_process
print("shutdown.... rpc... ", rank)
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:9140
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 77, in wrapper
return func(*args, **kwargs)
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 240, in shutdown
_wait_all_workers()
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 77, in wrapper
return func(*args, **kwargs)
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 165, in _wait_all_workers
args=(sequence_id, self_worker_name,),
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 77, in wrapper
return func(*args, **kwargs)
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/site-packages/torch/distributed/rpc/api.py", line 554, in rpc_sync
return fut.wait()
RuntimeError: Encountered exception in ProcessGroupAgent::enqueueSend: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:566] Read error [2001:700:4a01:10::38]:9889: Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
KeyboardInterrupt
[E thread_pool.cpp:112] Exception in thread pool task: Application timeout caused pair closure
[E thread_pool.cpp:112] Exception in thread pool task: [/opt/conda/conda-bld/pytorch_1587428228634/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [2001:700:4a01:10::38]:34716
I changed the code to print count as
for it in _local_dict:
    if rank == 0:
        count = count + 1
        print(count)
–> All elements in _local_dict are processed. However, the program is still stopped by the timeout.
2. Is distributed autograd/optimizer also used in this case?
–> Not yet. It is at https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html ??
3. Are any of those CPUs locate on the same machine? (so that shm can be helpful)
–> I use 32 CPUs on this machine. (All of my CPU on the machine.)
Thanks |
st178714 | I try with small data (~60 elements in _local_dict), and it worked.
Perhaps there is a problem with making this many RPC calls?
For the real data, the size of _local_dict is nearly 190,000 elements across 32 workers. For each element, I call rpc(…) 32 times.
Hence, each worker makes about 190,000 * 32 rpc(…) calls. With 32 workers, there are 190,000 * 32 * 32 calls in total.
Is there any problem with that?
Thanks, |
st178715 | Hi,
I removed the RPC calls and tried to run locally with
_local_dict
where each worker holds 1/32 of the total objects. Then I ran the program. It takes nearly 5s.
However, it cannot finish with RPC (timeout=10000s).
Perhaps the problem is related to the queue size of the RPC backend.
Please help!
Thanks, |
st178716 | Hi,
I added a future variable for each rpc.rpc_async call.
It worked. However, it is very slow.
It took around 30 minutes to finish this procedure.
ph0123:
for it in _local_dict:
    count = count + 1
    arr_send = []
    for i in range(0, size):
        arr_send.append([])
    u = _local_dict[it]
    source = u.vertexId
    for i in u.links:
        arr_send[int(i) % size].append(int(i))
    futs = []  # add here
    for i in array_rpc:
        my_target = "worker" + str(i)
        if len(arr_send[i]) > 0:
            futs.append(rpc.rpc_async(my_target, add_outlinks, args=(arr_send[i], source)))
    for fut in futs:  # add here
        fut.wait()  # add here
I tried to put
    for fut in futs:  # add here
        fut.wait()  # add here
outside of the loop, but then the program does not finish and is stopped by the timeout.
Could you explain why? And could you please suggest some ways to decrease the execution time? I ran the program with UPC++, and it takes some minutes to finish (~5-6 minutes).
Thanks |
st178717 | Sorry about the delay.
rpc.api._wait_all_workers() is not supposed to be called directly by applications. I was only referencing it as one example of using rank 0 as a coordinator. The recommended way is to do something like below:
futs = []
for i in array_rpc:
    futs.append(rpc.rpc_async(my_target, add_outlinks, args=(arr_send[i], source)))
...
for fut in futs:
    fut.wait()
Allow me some time, still reading the contents above.
Edits
I see you are already using future in the latest version.
Could you explain why? And could you please suggest some ways to decrease the execution time? I ran the program with UPC++, and it takes some minutes to finish (~5-6 minutes).
Given that the computation only takes 5s, it looks like pickling and communication take the majority of the time. Curious, can you profile the time taken for one rpc_sync?
The communication pattern above looks like an allToAll. Is it guaranteed that the data sent from rank x to rank y will be of the same size? If so, you can use the allToAll API (not in v1.5, only available on master now, will come to v1.6).
Regarding ways to speed up, one easy attempt would be increasing the number of threads. The program currently sets num_send_recv_threads=16. You could try 64. This could help speed up deserialization on the server side. But serialization on the sender side still occurs inline; we probably should also offload that to the thread pool in future releases.
I take that back. I think one major reason is that pickling and execution both require the Python GIL, and hence they won't run in parallel on the server side. Can you try using TorchScript functions? You can find examples here by searching for @torch.jit.script. Even if we use @torch.jit.script functions, the pickle/unpickle would still require the GIL as those are Python objects, but the execution can run in parallel. |
st178718 | ph0123:
the outputs are as below. The worker0 is not see in the output. I try to print “count” value. But there is no output for “count” variable.
It means the program didn't hit the except branch below:
except:
    print("rank ", rank, " run ", count, "/", len(_local_dict))
If there are exceptions occur when processing the RPC in the try block, it will be thrown locally when you call fut.wait() or rref.to_here().
Given the above code, the timeout won’t occur in the try block, as rpc_async always returns immediately with a future object. |
st178719 | ph0123:
I use 32 CPUs on this machine. (All of my CPU on the machine.)
In this case, the new TensorPipe (coming in v1.6) RPC backend might be useful, as it provides a shm channel to send RPC data. Will keep you posted on that. |
st178720 | Thank you for your responses.
I will wait for the 1.6 version and try to work with RRef.
I will compare them and share the results with you.
Thanks, |
st178721 | Hey @ph0123
Thanks. Regarding the TorchScript functions, I double checked with the team: it's only the serialization on the caller side that holds the GIL as of today. The deserialization on the callee can run in parallel. Curious to see how much TorchScript functions can help. |
st178722 | Hi,
I tried torch.jit.script as follows:
@torch.jit.script
def add_outlinks(arr, source):
    for dest in arr:
        if _local_dict.get(dest):
            _local_dict[dest].in_links.append(int(source))
In my program, _local_dict is defined as a global variable, so there was no error before. When I add “@torch.jit.script”, the terminal shows the following error:
Traceback (most recent call last):
File "/mnt/c/python_project/Pytorch/test.py", line 222, in <module>
@torch.jit.script
File "/home/cnphuong/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1290, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
RuntimeError:
attribute lookup is not defined on python value of type 'dict':
File "/mnt/c/python_project/Pytorch/test.py", line 225
def add_outlinks(arr, source):
for dest in arr:
if _local_dict.get(dest):
~~~~~~~~~~~~~~~ <--- HERE
_local_dict[dest].in_links.append(int(source))
Do I have to install PyTorch from source or not?
My PyTorch version is 1.5, CPU only.
Thanks, |
st178723 | That probably means TorchScript does not support dict.get global variable yet.
cc JIT expert @Michael_Suo
Let me try this locally. |
st178724 | Just check with JIT team, TorchScript functions does not support global variables. One alternative is to do it through RRef.local_value(). Let me prepare an example for you. |
st178725 | Hi @ph0123
I tried this: https://github.com/pytorch/pytorch/pull/39900
But it actually exposes another gap in TorchScript function type annotation. JIT team is investigating. |
st178726 | Fixes are coming to master:
github.com/pytorch/pytorch: [rpc] use annotation_str for RRef type serialization (gh/wanchaol/111)
github.com/pytorch/pytorch: [rpc] fix RRef alias annotation (gh/wanchaol/112) |
st178727 | Q1: If I have two models named A and B, both wrapped with DDP, and loss = A(B(inputs)), will DDP work? It seems that gradients will be synced when loss.backward() is called.
Q2: If loss = A(B(inputs1), B(inputs2)), will DDP work? The forward function of B is called twice. Btw, I don't know what reducer.prepare_for_backward does… |
st178728 | zhouzh:
Q1: If I have two models named A and B, both wrapped with DDP, and loss = A(B(inputs)), will DDP work?
It should work. This is using the output from B(inputs) to connect the two graphs together. The AllReduce communication from A and B won't run interleaved, I think. If it hangs somehow, you could try setting the process_group argument of the two DDP instances to different ProcessGroup objects created using the new_group API. This will fully decouple the communication of A and B.
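A rough sketch of that option (model_a, model_b, world_size, and rank are assumed; every rank must call new_group):

pg_a = dist.new_group(ranks=list(range(world_size)))
pg_b = dist.new_group(ranks=list(range(world_size)))
A = DistributedDataParallel(model_a, device_ids=[rank], process_group=pg_a)
B = DistributedDataParallel(model_b, device_ids=[rank], process_group=pg_b)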
It seems that gradients will be synced when loss.backward() is called.
Yes, see this page for more detail: https://pytorch.org/docs/stable/notes/ddp.html
Q2: If loss = A(B(inputs1), B(inputs2)), will DDP work? The forward function of B is called twice. Btw, I don't know what reducer.prepare_for_backward does…
This won't work. DDP requires forward and backward to run alternately. The above code would run forward on B twice before one backward, which would mess up DDP's internal state. However, the following would work. Suppose the local module wrapped by B is C:
class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.c = C()

    def forward(self, inputs):
        return self.c(inputs[0]), self.c(inputs[1])

B = DistributedDataParallel(Wrapper(), ...)
loss = A(B([inputs1, inputs2]))
This is basically using a thin wrapper over C to process two inputs in one forward call. |
st178729 | Hi,
I have a different but related problem.
I have a detection model with variable input size. In some extreme cases, a RuntimeError with OOM occurs, so I wrap the forward+backward in a try/except:
for images, targets in data_loader:
    images = images.to(device)
    targets = [target.to(device) for target in targets]
    try:
        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())
        optimizer.zero_grad()
        losses.backward()
    except Exception as ex:
        torch.cuda.ipc_collect()
        torch.cuda.empty_cache()
        continue
This helps to some extent, until:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing its output (the return value of forward). You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel. If you already have this argument set, then the distributed data parallel module wasn’t able to locate the output tensors in the return value of your module’s forward function. Please include the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /opt/conda/conda-bld/pytorch_1556653099582/work/torch/csrc/distributed/c10d/reducer.cpp:408)
To be clear, I already set find_unused_parameters=True, and broadcast_buffers=False. My guess is some internal state was strained somehow. Is it possible to reset all the state during my except catch? |
st178730 | Hey @qianyizhang
Yes, the error is expected. Because, say you have two DDP processes, X and Y. If process X hits an OOM error in the forward pass of one iteration but Y runs correctly, X would skip its backward pass in that iteration, causing a de-synchronization. DDP itself cannot recover from this error.
However, torchelastic is built to solve this exact problem. It would kill the entire DDP gang, reconstruct a new DDP gang, and revert to the previous checkpoint when such an OOM occurs. cc @Kiuk_Chung |
st178731 | Adding a bit more context to @mrshenli’s comments, you could try to reset the DDP state by calling destroy_process_group() and re-initializing it, however that doesn’t guarantee that your tensors (distributed among multiple workers) are also reset. In short, a complete state reset on the worker is application dependent (and often non-trivial). For transient exceptions you can use torchelastic to launch your workers, and just let the worker process throw the exception out and fail. Elastic will monitor the worker pids and will restart the world if it detects that one (or more) workers have failed. Note, that due to this behavior, (assuming you have checkpoints) you will lose progress between checkpoints. |
st178732 | @Kiuk_Chung Can you elaborate more on the last part?
My goal is to keep the training process going with minimum withdraw from failed synchronization.
I assume your elastic feature offers the flexibility of leaving + rejoining the sync pool at any given point?
If one failed worker means all processes have to restart from the last checkpoint, it's basically the same as running a background monitor process that constantly checks -> kills -> restarts the whole job, which is not very efficient… |
st178733 | The funny part is that the mentioned RuntimeError only happens 10% of the time, while the training process can tough it out most of the time, as if one worker just has a slow start (by retraining a second round). I suspect it hits OOM before finishing computing the first bucket of gradients… |
st178734 | It's the latter: elastic monitors worker pids and restarts the world whenever one or more pids fail. Elastic was designed to be more general purpose to fit a variety of distributed training use-cases. For pure torch rpc apps it might be possible to allow workers to leave/join the job on the fly, but if you are using process groups the backend may not allow this by design - for instance, if you are using the NCCL backend, NCCL itself does not allow resizing of communicators unless you destroy them and restart them. |
st178735 | @Kiuk_Chung I see.
But is it possible to reset DDP state on one worker (with NCCL backend)? Without resetting everything, which is somewhat expansive.
After all, in my case the connection is not lost, it’s simply halted with OOM. The ideal scenario could be:
(assuming a bad case where bucket 1 gradients already reduced, and stuck on bucket2 with worker#0 OOM)
worker#0 rerun another batch and only sync bucket2 + later, to “catch up” with the rest.
worker#0 continue to sync while only sending 0.0 gradients without further computation.
I imagine solution#1 is very tricky if not impossible, whereas solution#2 is very much feasible?
If so, can you point me some directions to make it work? (files, watch-outs, etc.) Thanks! |
st178736 | If the collective operation fails (with an OOM or other exception) and you are able to catch it you may be able to reissue the operation. Whether this works or not depends on whether the distributed backend is still in a good state or not. With NCCL once the state goes bad you have to destroy the process group and re initialize it. You could try to salvage your processes but you’d have to call destroy and initialize on all your workers together. There may be other states in your application that goes out of sync whenever there is an exception only observed in a subset of your workers. if you are able to recover that state you can try to restore the workers into a well defined point in time. Based on what we’ve observed with distributed applications it’s non trivial to restore the full application state (ddp + user code) in a distributed setting and the cleanest restore is to tear the processes down and restore from a checkpoint. This is why elastic was designed as such. Trade off is restart overhead versus correctness and maintainability. Note that with elastic you are simply restarting your worker processes so the penalty you pay is your initialization time (loading the model, allocating mem for data, etc) and not actually restarting the node or container. |
st178737 | Currently, we have Lightning and Ignite as high-level libraries to help with training neural networks in PyTorch. Which of them makes it easier to train in a multi-GPU environment? |
st178738 | I have used PyTorch Lightning. (While I can’t compare the two, as I haven’t used Ignite).
It has been the smoothest experience I have come across w.r.t. multi-GPU training. Changing from a single GPU to a multi-GPU setup is as simple as telling the Trainer how many GPUs you’d like to use.
TPU support is also integrated, where you’d just specify num_tpu_cores, without changing any code. |
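For reference, a minimal sketch of what that looks like (based on the Lightning 0.7.x-era API, where the GPU count and backend are passed to the Trainer; argument names may differ in other versions, and MyLightningModule is a placeholder):
import pytorch_lightning as pl

model = MyLightningModule()  # placeholder for your LightningModule
trainer = pl.Trainer(gpus=8, distributed_backend='ddp')  # 'ddp' runs one process per GPU
trainer.fit(model)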
st178739 | Thank you very much for your contribution.
I started using Ignite after reading the exciting quickstart guide, which shows the essentials for defining and training a simple model. But the provided examples that use multi-GPU training do not seem to follow the same simplicity.
Currently, the stable (v0.3.0) release relies only on the native PyTorch distributed API, where users need to manually set up the distributed process group, wrap the model with nn.parallel.DistributedDataParallel, and execute the script with the torch.distributed.launch tool, or use mp.spawn. |
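For context, a minimal sketch of that manual setup (hedged: assumes the script is launched with python -m torch.distributed.launch --nproc_per_node=N script.py, which sets the rendezvous env vars and passes --local_rank; MyModel is a placeholder):
import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)  # filled in by the launcher
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group('nccl')  # reads MASTER_ADDR/PORT, RANK, WORLD_SIZE from env
model = MyModel().cuda(args.local_rank)
model = DDP(model, device_ids=[args.local_rank], output_device=args.local_rank)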
st178740 | I would go with Lightning then.
The documentation is pretty clear and readable.
https://pytorch-lightning.readthedocs.io/en/0.7.6/multi_gpu.html |
st178741 | I am training an Albert language model using the huggingface transformers library. While training I notice that on my p3dn instance, GPU 0 is almost completely used but the others have around 50% of their memory unused. I can only fit a batch size of 85 on this system; above that I get OOM.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:16.0 Off | 0 |
| N/A 77C P0 291W / 300W | 30931MiB / 32510MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:00:17.0 Off | 0 |
| N/A 71C P0 255W / 300W | 18963MiB / 32510MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... On | 00000000:00:18.0 Off | 0 |
| N/A 71C P0 95W / 300W | 18963MiB / 32510MiB | 98% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-SXM2... On | 00000000:00:19.0 Off | 0 |
| N/A 68C P0 89W / 300W | 18963MiB / 32510MiB | 72% Default |
+-------------------------------+----------------------+----------------------+
| 4 Tesla V100-SXM2... On | 00000000:00:1A.0 Off | 0 |
| N/A 68C P0 78W / 300W | 18963MiB / 32510MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 5 Tesla V100-SXM2... On | 00000000:00:1B.0 Off | 0 |
| N/A 69C P0 96W / 300W | 18963MiB / 32510MiB | 65% Default |
+-------------------------------+----------------------+----------------------+
| 6 Tesla V100-SXM2... On | 00000000:00:1C.0 Off | 0 |
| N/A 69C P0 79W / 300W | 18963MiB / 32510MiB | 95% Default |
+-------------------------------+----------------------+----------------------+
| 7 Tesla V100-SXM2... On | 00000000:00:1D.0 Off | 0 |
| N/A 74C P0 80W / 300W | 18963MiB / 32510MiB | 12% Default |
+-------------------------------+----------------------+----------------------+
I was using the default setting, which uses data parallel.
I also tried distributed training using python -m torch.distributed.launch --nproc_per_node 8 test_lm.py, but it started a new job for each and every GPU.
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Calling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated
/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.
category=FutureWarning,
(the two warnings above are repeated once by each of the 8 launched processes)
Can anyone suggest what I should do for efficient training? |
st178742 | Looks like other processes might have stepped onto cuda:0. Have you tried setting CUDA_VISIBLE_DEVICES to make sure that each process only sees one GPU? |
st178743 | PIDS for each device
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 23094 C python 26309MiB |
| 1 23094 C python 14341MiB |
| 2 23094 C python 14341MiB |
| 3 23094 C python 14341MiB |
| 4 23094 C python 14341MiB |
| 5 23094 C python 14341MiB |
| 6 23094 C python 14341MiB |
| 7 23094 C python 14341MiB |
+-----------------------------------------------------------------------------+
Would using distributed training help here? |
st178744 | I have a similar problem. I used DistributedDataParallel and python -m torch.distributed.launch --nproc_per_node=8.
[attached image: nvidia-smi screenshot, 1300×784] |
st178745 | Hey @Tyan
The figure you shared looks a little different from the one @karan_purohit attached. Looks like all processes step into cuda:0, which could happen if they use cuda:0 as the default device and then some tensors/context were unintentionally created there. E.g., when you call empty_cache() without a device context, or create some cuda tensor without specifying device affinity.
Can you try setting CUDA_VISIBLE_DEVICES for all processes so that each process exclusively works on one device? |
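A minimal sketch of that suggestion (hedged: assumes one process per GPU launched via torch.distributed.launch; note that once CUDA_VISIBLE_DEVICES hides the other devices, the only visible device inside the process is cuda:0):
import os
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)
args = parser.parse_args()

# Must happen before the first CUDA call so this process never touches other GPUs.
os.environ['CUDA_VISIBLE_DEVICES'] = str(args.local_rank)
device = torch.device('cuda:0')  # the single device this process can now see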
st178746 | Hey @karan_purohit
Looks like there is only one process using GPUs in your application while there should be 8 processes? Did you create DDP instances with proper device ids in all processes? Could you please share a min snippet of your Python script that can reproduce this behavior? |
st178747 | Hi, @mrshenli
Thanks a billion for your reply. I didn’t set CUDA_VISIBLE_DEVICES, I set env as follows:
def train(args):
torch.backends.cudnn.benchmark = True
dist.init_process_group('nccl')
torch.cuda.set_device(args.local_rank)
device = torch.device('cuda', args.local_rank)
....
model = model.to(device)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank)
criterion = criterion.to(device)
...
lr_imgs = lr_imgs.to(device)
hr_imgs = hr_imgs.to(device)
Similar settings in another model's training give balanced GPU utilization. It's very strange.
I tried setting CUDA_VISIBLE_DEVICES; it didn't work. |
st178748 | Hi @Tyan
How did you set CUDA_VISIBLE_DEVICES? Is it sth like os.environ["CUDA_VISIBLE_DEVICES"]=f"{args.local_rank}" in every individual process before running any cuda related code?
Besides, can you try swapping the order of the following two lines? I am not 100% sure, but ProcessGroupNCCL might create CUDA context on the default device.
dist.init_process_group('nccl')
torch.cuda.set_device(args.local_rank) |
st178749 | @mrshenli
Yeah. I have tried what you said. It didn’t work. Current setting:
def train(args):
# Env
os.environ["CUDA_VISIBLE_DEVICES"] = str(args.local_rank)
torch.backends.cudnn.benchmark = True
torch.cuda.set_device(args.local_rank)
dist.init_process_group('nccl')
device = torch.device('cuda', args.local_rank) |
st178750 | Hi, @mrshenli
I checked the code again. I found that in utils.py some variables occupy the GPU:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Some constants
rgb_weights = torch.FloatTensor([65.481, 128.553, 24.966]).to(device)
imagenet_mean = torch.FloatTensor([0.485, 0.456, 0.406]).unsqueeze(1).unsqueeze(2)
imagenet_std = torch.FloatTensor([0.229, 0.224, 0.225]).unsqueeze(1).unsqueeze(2)
imagenet_mean_cuda = torch.FloatTensor([0.485, 0.456, 0.406]).to(device).unsqueeze(0).unsqueeze(2).unsqueeze(3)
imagenet_std_cuda = torch.FloatTensor([0.229, 0.224, 0.225]).to(device).unsqueeze(0).unsqueeze(2).unsqueeze(3)
Very sorry to trouble you. |
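One way to avoid that pattern (hedged sketch, the names are illustrative): keep such constants on the CPU at import time and move them to the right device only inside the functions that use them, so each process follows its own local rank:
import torch

RGB_WEIGHTS = torch.FloatTensor([65.481, 128.553, 24.966])  # stays on CPU at import time

def apply_weights(x):
    # Follows the input tensor's device, i.e. whichever GPU this process owns.
    return x * RGB_WEIGHTS.to(x.device)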
st178751 | Hi,
For single node, I set
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
and the world size is passed as an input parameter.
However, with multiple nodes, we have to set these differently, but I do not know how to set them.
For example, I know the node names with 4 nodes as below.
C1-01
C1-02
C2-01
C2-02
When I submit the job, the node names will change.
How to set MASTER_ADDR for the program?
Thanks, |
st178752 | Solved by mrshenli in post #13
Thanks for reporting, it’s an error in the doc, I think it needs to be:
rpc.ProcessGroupRpcBackendOptions(
num_send_recv_threads=16,
rpc_timeout=datetime.timedelta(seconds=1000)
)
Let me try. |
st178753 | Hey @ph0123 do you mean before submitting jobs, neither node name nor node IP addresses are known to you? |
st178754 | Hi,
I submit a batch file. In the batch file, I can get the node names of all nodes, which are assigned by the server.
For each submission, the node names will change.
I also have another question. When I run with real data (big data), the gloo backend stops after 60 seconds in my program.
How do I set the timeout for this situation? The program will run for 20-30 minutes.
Thanks, |
st178755 | ph0123:
I submit a batch file. In the batch file, I can get the node names of all nodes, which are assigned by the server.
For each submission, the node names will change.
In that case, I wonder if it would be possible to programmatically figure out the master. E.g., ask all processes to sort all node names and then always use the first one?
How do I set the timeout for this situation? The program will run for 20-30 minutes.
Are you using RPC or DDP? For RPC, check out this PR (https://github.com/pytorch/pytorch/pull/38577). For DDP, init_process_group does take a timeout argument. |
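For example (hedged sketch: assumes the batch script can export the allocated node list in an environment variable, here called NODE_LIST purely for illustration), every rank can derive the same master deterministically:
import os

# e.g. NODE_LIST="C1-01,C1-02,C2-01,C2-02"
nodes = sorted(os.environ['NODE_LIST'].split(','))
os.environ['MASTER_ADDR'] = nodes[0]  # every rank picks the same (first) node
os.environ['MASTER_PORT'] = '29500'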
st178756 | The 60 second timeout might be the default RPC timeout. Which version of PyTorch are you using?
In v1.4, there is a hidden _set_rpc_timeout API.
In v1.5, you can customize the ProcessGroupRpcBackendOptions to provide a timeout. |
st178757 | We will be adding per-RPC timeout in v1.6. See
https://github.com/pytorch/pytorch/issues/32686
https://github.com/pytorch/pytorch/issues/36000 |
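For reference, the per-RPC timeout is expected to be a float number of seconds passed per call, roughly as in the sketch below (hedged: based on the API tracked in those issues; it only works on builds that already include the feature):
import torch
from torch.distributed import rpc

# timeout is in seconds; a negative value falls back to the global RPC timeout.
fut = rpc.rpc_async("worker1", torch.add, args=(torch.ones(2), 1), timeout=5)
result = fut.wait()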
st178758 | Thanks,
For multiple nodes, I think I can print the node names out to a file. The first node name is the master node.
Let me try the timeout. |
st178759 | HI,
I tried to set the timeout as described in the GitHub issue: https://github.com/pytorch/pytorch/issues/32686
rpc.rpc_async(my_target, add_outlinks, args=(arr_send[i],source),timeout=None)
The error:
TypeError: rpc_async() got an unexpected keyword argument 'timeout'
Perhaps rpc_async does not have the "timeout" parameter in this version.
Thanks, |
st178760 | That per-RPC timeout is only for v1.6, which hasn't been released yet. The feature is available on master now, though: you will need to either use a nightly binary or build from source to get it. If you are using v1.4 or v1.5, please try the other two options mentioned above.
What does the following code print in your environment?
import torch
torch.__version__ |
st178761 | Cool, then below should be the way to go. The link below points to the doc that contains an example.
In v1.5, you can customize the ProcessGroupRpcBackendOptions to provide a timeout. |
st178762 | Yes,
rpc.init_rpc(my_name, rank=rank, world_size=size, rpc_backend_options=rpc.ProcessGroupRpcBackendOptions(num_send_recv_threads=16,datetime.timedelta(seconds=1000))) # initial_rpc
Output:
rpc.init_rpc(my_name, rank=rank, world_size=size, rpc_backend_options=rpc.ProcessGroupRpcBackendOptions(num_send_recv_threads=16,datetime.timedelta(seconds=1000))) # initial_rpc
^
SyntaxError: positional argument follows keyword argument |
st178763 | Thanks for reporting, it’s an error in the doc, I think it needs to be:
rpc.ProcessGroupRpcBackendOptions(
num_send_recv_threads=16,
rpc_timeout=datetime.timedelta(seconds=1000)
)
Let me try. |
st178764 | import datetime, os
from torch.distributed import rpc
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
rpc.init_rpc(
"worker",
rank=0,
world_size=1,
rpc_backend_options=rpc.ProcessGroupRpcBackendOptions(
num_send_recv_threads=16,
rpc_timeout = datetime.timedelta(seconds=1000) # note that this will change to float type to support TorchScript integration.
)
)
rpc.shutdown()
Yep, above should work. Will submit a PR to fix. |
st178765 | github.com/pytorch/pytorch: [Easy Review] Fix ProcessGroupRpcBackendOptions Doc (pytorch:gh/mrshenli/193/base ← pytorch:gh/mrshenli/193/head, opened Jun 10, 2020 by mrshenli, +1 -1) |
st178766 | I have trained my model on 4 GPUs with DDP (torch.nn.parallel.DistributedDataParallel).
But I want to load the model for inference on a machine with only a single GPU; the code is listed below:
m = Model()
m.load_state_dict(torch.load('model.pth'))
It raises the errors listed below:
RuntimeError: Error(s) in loading state_dict for ObjectDetector:
Missing key(s) in state_dict: "backbone.block1.0.block.0.weight", "backbone.block1.0.block.0.bias", "backbone.block1.0.block.1.weight", "backbone.block1.0.block.1.bias", "backbone.block1.0.block.1.running_mean", "backbone.block1.0.block.1.running_var", "backbone.block1.1.block.0.weight", "backbone.block1.1.block.0.bias", "backbone.block1.1.block.1.weight", "backbone.block1.1.block.1.bias", "backbone.block1.1.block.1.running_mean", "backbone.block1.1.block.1.running_var", "backbone.block1.2.conv_block1.block.0.weight", "backbone.block1.2.conv_block1.block.0.bias", "backbone.block1.2.conv_block1.block.1.weight", "backbone.block1.2.conv_block1.block.1.bias", "backbone.block1.2.conv_block1.block.1.running_mean", "backbone.block1.2.conv_block1.block.1.running_var", "backbone.block1.2.conv_block2.block.0.weight", "backbone.block1.2.conv_block2.block.0.bias", "backbone.block1.2.conv_block2.block.1.weight", "backbone.block1.2.conv_block2.block.1.bias", "backbone.block1.2.conv_block2.block.1.running_mean", "backbone.block1.2.conv_block2.block.1.running_var", "backbone.block2.0.block.0.weight", "backbone.block2.0.block.0.bias", "backbone.block2.0.block.1.weight", "backbone.block2.0.block.1.bias", "backbone.block2.0.block.1.running_mean", "backbone.block2.0.block.1.running_var", "backbone.block2.1.conv_block1.block.0.weight", "backbone.block2.1.conv_block1.block.0.bias", "backbone.block2.1.conv_block1.block.1.weight",
.......
"backbone.block5.1.conv_block1.block.1.running_var",
I have tried
m.load_state_dict(torch.load('model.pth'), strict=False)
But the inference results are very strange, not as expected.
How should I fix it? Thanks |
st178767 | Solved by Wilson_Ho in post #2
I have resolved the issues. It’s related to 1686
state_dict = torch.load(weight_path)
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
name = k[7:] # remove 'module.' of DataParallel/DistributedDataParallel
new_state_dict[name] = v
m.load… |
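For completeness, a full version of that fix (hedged sketch; Model and the checkpoint path are placeholders). Saving model.module.state_dict() at training time avoids the 'module.' prefix altogether:
from collections import OrderedDict
import torch

m = Model()  # placeholder for your model class
state_dict = torch.load('model.pth', map_location='cpu')

new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:] if k.startswith('module.') else k  # strip the DDP/DataParallel wrapper prefix
    new_state_dict[name] = v

m.load_state_dict(new_state_dict)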