st178868
Thanks for your reply! Here is my short code:

def main(rank, dev_id, args):
    torch.distributed.init_process_group(backend="nccl", init_method='tcp://localhost:22',
                                         world_size=args['num_devices'], rank=dev_id)
    model = mymodel.to(dev_id)
    optimizer = optim.Adam(model.parameters(), lr=args['lr'])
    for epoch in range(epochs):
        pred = model(inputs)
        loss = criterion(pred, label)
        optimizer.zero_grad()
        loss.backward()
        for param_group in optimizer.param_groups:
            for p in param_group['params']:
                if p.requires_grad and p.grad is not None:
                    # print(p.grad.data.shape, p.grad.data.device)  # P.S. we can get grad information here
                    dist.all_reduce(p.grad.data, op=dist.ReduceOp.SUM)
                    p.grad.data /= n_processes
        optimizer.step()
        torch.distributed.barrier()

mp = torch.multiprocessing.get_context('spawn')
for id, device_id in enumerate(devices):
    procs.append(mp.Process(target=main, args=(id, device_id, args), daemon=True))
    procs[-1].start()
for p in procs:
    p.join()

The PyTorch version I'm using is 1.4.0 and all my code runs on an AWS instance.

P.S. The port in init_method can only be 22, which is strange; otherwise I get a RuntimeError like this:

File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 120, in _tcp_rendezvous_handler
    store = TCPStore(result.hostname, result.port, world_size, start_daemon)
RuntimeError: connect() timed out.

Thanks for your time and reading!
st178869
The error has been fixed. ‘Stop_waiting response is expected’ error occurred in TCPStore.cpp. So it was actually the communication problem. It works finally when I reinstalled NCCL: https://github.com/NVIDIA/nccl.git 83
st178870
Dear @mrshenli, I have noticed that your team/colleagues released a new tutorial on the parameter server using the RPC framework (rpc_param_server_tutorial). I really appreciate the example with its detailed and helpful explanations, and it seems to me that it can work with multiple trainers accessing the same parameter server. I think the code below makes sure that only one parameter server can be created by the trainers.

# The global parameter server instance.
param_server = None
# A lock to ensure we only have one parameter server.
global_lock = Lock()

def get_parameter_server(num_gpus=0):
    """
    Returns a singleton parameter server to all trainer processes
    """
    global param_server
    # Ensure that we get only one handle to the ParameterServer.
    with global_lock:
        if not param_server:
            # construct it once
            param_server = ParameterServer(num_gpus=num_gpus)
        return param_server

def run_parameter_server(rank, world_size):
    # The parameter server just acts as a host for the model and responds to
    # requests from trainers.
    # rpc.shutdown() will wait for all workers to complete by default, which
    # in this case means that the parameter server will wait for all trainers
    # to complete, and then exit.
    print("PS master initializing RPC")
    rpc.init_rpc(name="parameter_server", rank=rank, world_size=world_size)
    print("RPC initialized! Running parameter server...")
    rpc.shutdown()
    print("RPC shutdown on parameter server.")

However, when it comes to distributed autograd and the forward and backward passes in the training loop below:

def run_training_loop(rank, num_gpus, train_loader, test_loader):
    ...
    for i, (data, target) in enumerate(train_loader):
        with dist_autograd.context() as cid:
            model_output = net(data)
            target = target.to(model_output.device)
            loss = F.nll_loss(model_output, target)
            if i % 5 == 0:
                print(f"Rank {rank} training batch {i} loss {loss.item()}")
            dist_autograd.backward(cid, [loss])
            # Ensure that dist autograd ran successfully and gradients were
            # returned.
            assert remote_method(
                ParameterServer.get_dist_gradients,
                net.param_server_rref,
                cid) != {}
            opt.step(cid)

    print("Training complete!")
    print("Getting accuracy....")
    get_accuracy(test_loader, net)

How would I make sure there are no concurrency issues? For example, if there are two trainers and one is doing the forward pass while the other is doing the backward pass, how do I make sure the two processes do not conflict with each other? Thanks,
st178871
Hey @ZiyiZhu That's a great question! I have two comments on this concurrent param updating approach:

1. Each trainer has its dedicated gradients on the parameter server. When using distributed autograd, the computed gradients are stored in the autograd context (instead of in param.grad), which can be identified by the unique cid. So there is no concern about race conditions for gradient computation.
2. It is true that there can be multiple trainers updating the same parameter concurrently, and it is true that the parameter might change after a trainer computes the gradients and before it applies them to the parameter. This idea is partially borrowed from the Hogwild! paper. The approach of using "not perfectly up-to-date gradients" can also be found in other projects (e.g., PipeDream). In general, this is a trade-off between model accuracy and training speed, and in practice we saw several use cases where it helped to accelerate training a lot with little or no accuracy penalty.

cc @rvarm1
st178872
Dear @mrshenli, Thank you very much for your explanations with suggested paper references. I will follow up if I still have more questions on the RPC when I finish reading the papers. Best, Ziyi
st178873
Dear @mrshenli, I have briefly gone through the PipeDream paper. Now I understand better how this parameter-server RPC example can be implicitly pipelined, which speeds up the training phase for model parallelism. However, I still have several questions below and hope you could help answer them:

1. This rpc_parameter_example seems to me not a strict parameter server strategy for data-parallel training. Multiple trainers can update the "parameter server", but that is done separately by distributed autograd, which means there is no averaging of the gradients across trainers, IIUC? Does a typical parameter server strategy look more like the one in the Scaling Distributed paper instead?
2. If I want to do this parameter server strategy (averaging the gradients of each trainer) for data parallelism, is it possible to do that with RPC? I remember last time you mentioned that by default Distributed Data Parallel (DDP) uses ring allreduce for averaging the gradients. Is ring allreduce better than a parameter server all the time? And can DDP use a parameter server strategy instead?
3. In the PipeDream paper, the workload can be fine-grained and controlled as below [figure from the paper]. Would RPC be able to do the same? Or what would be the order/priority for the RPC framework to choose which job (1, 2, 3, 4 forward or 1, 2, 3, 4 backward) to run on the GPU?

Thank you very much! Best,
st178874
I was referring to this part of the PipeDream paper:

Each backward pass in a stage results in weight updates; the next forward pass uses the latest version of weights available, and "stashes" a copy of these weights to use during the corresponding backward pass. Although the forward pass will not see updates from incomplete in-flight mini-batches, learning is still effective because model weights change relatively slowly and bounded staleness has been found effective in improving training speeds

And the Hogwild! paper linked above also mentions something in a similar lock-free spirit:

In this work, we propose a simple strategy for eliminating the overhead associated with locking: run SGD in parallel without locks, a strategy that we call Hogwild!. In Hogwild!, processors are allowed equal access to shared memory and are able to update individual components of memory at will. Such a lock-free scheme might appear doomed to fail as processors could overwrite each other's progress. However, when the data access is sparse, meaning that individual SGD steps only modify a small part of the decision variable, we show that memory overwrites are rare and that they introduce barely any error into the computation when they do occur. We demonstrate both theoretically and experimentally a near linear speedup with the number of processors on commonly occurring sparse learning problems.

In general, if a lock is necessary (e.g., due to an unacceptable accuracy drop), applications can do so by explicitly acquiring locks, but this will certainly have an impact on training speed. So it is up to the application to decide how to play with the trade-off.

Is ring allreduce better than parameter server all the time?

No. The merit of allreduce is that it can (actually, depending on the loss function) achieve mathematical equivalence with local training. But whether synchronous training is better than asynchronous training is an open question. DistributedDataParallel in PyTorch uses allreduce, but there are also other flavors of data parallelism, e.g., SlowMo uses gossip-based averaging.

But can the DDP use parameter server strategy instead?

Yes, it certainly can. The Hogwild! paper linked above is one example of using parameter-server-based data parallelism on the "decision variable", e.g., an embedding table.

Currently the RPC package does not support priority-based communication/execution yet, but this might be doable from the application side. The callee can maintain priority queues to order incoming tasks and use an RPC argument to indicate the priority. The WIP async user function will help to reduce the callee-side overhead for this use case.
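To make that last suggestion more concrete, here is a minimal sketch of a callee-side priority queue; submit_task and the worker thread are illustrative names (not part of the RPC API), and error handling is omitted:

import itertools
import queue
import threading

_counter = itertools.count()           # tie-breaker so tasks never compare by function
_task_queue = queue.PriorityQueue()

def _worker_loop():
    while True:
        _priority, _seq, fn, args = _task_queue.get()
        fn(*args)                      # lowest priority value runs first
        _task_queue.task_done()

threading.Thread(target=_worker_loop, daemon=True).start()

def submit_task(priority, fn, *args):
    # Callers would invoke this via rpc.rpc_async(callee, submit_task, args=(prio, some_fn, ...)),
    # using the priority argument to order execution on the callee side.
    _task_queue.put((priority, next(_counter), fn, args))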
st178875
Hi @mrshenli, Thank you very much again for the provided explanations and references. I will look into them and hopefully have efficient implementations of DDP and RPC in Pytorch. Best, Ziyi
st178876
Dear @mrshenli, Following up on the Hogwild! paper mentioned previously. I also found out that PyTorch has the PyTorch Hogwild example using multiprocessing techniques. I would like to redo this and then extend it to multi-machine and distributed training, either using the existing PyTorch DDP or a custom design built upon the distributed communication APIs. The big picture is shown below:

[figure: proposed architecture, with Hogwild!-style parameter sharing inside Machine #0]

Within Machine #0, Hogwild! can be performed. Does this make sense to you? However, when I tried a simple example to test torch.multiprocessing locally first, it seems that multiprocessing does not work. Please see the result below:

import os
import torch
from torch import nn
import torch.distributed as dist
import torch.multiprocessing as mp

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(torch.__version__)
>> 1.4.0

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.out = nn.Linear(in_features=5, out_features=1)

    def forward(self, t):
        # output
        t = self.out(t)
        return t

def addNet(model):
    for para in model.parameters():
        tmp = torch.ones_like(para.data)
        para.data = para.data + tmp
        print(para.data)

torch.manual_seed(101)
model = Net()
model.share_memory()
for para in model.parameters():
    print(para.data)
>>> tensor([[-0.2701, -0.0445, -0.3659, 0.3463, -0.1884]])
>>> tensor([-0.4306])

num_processes = 2
processes = []
for rank in range(num_processes):
    p = mp.Process(target=addNet, args=(model,))
    p.start()
    processes.append(p)
for p in processes:
    p.join()
>>> tensor([[0.7299, 0.9555, 0.6341, 1.3463, 0.8116]])
>>> tensor([[0.7299, 0.9555, 0.6341, 1.3463, 0.8116]])
>>> tensor([0.5694])
>>> tensor([0.5694])

for para in model.parameters():
    print(para.data)
>>> tensor([[-0.2701, -0.0445, -0.3659, 0.3463, -0.1884]])
>>> tensor([-0.4306])

It seems to me that the child processes just made a copy of the NN in the parent process, and the NN was not being shared among the parent and the two children. Did I miss anything from the PyTorch Hogwild example? It would be very much appreciated if you could share any thoughts. Best, Ziyi
st178877
Hey @ZiyiZhu did you use the “spawn” mode? If you run the example as is, does it work?
st178878
Hi @mrshenli It seems not. Even the addNet is not printing anything.
st178879
I sometimes run into weird errors when using multiprocessing in notebook. Does it work if you directly launch the script from command line? Let me try that locally.
st178880
Oh I see, you need to use the inplace add_, sth like:

import os
import torch
from torch import nn
import torch.distributed as dist
import torch.multiprocessing as mp

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(torch.__version__)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.out = nn.Linear(in_features=5, out_features=1)

    def forward(self, t):
        # output
        t = self.out(t)
        return t

def addNet(model):
    for para in model.parameters():
        tmp = torch.ones_like(para.data)
        with torch.no_grad():
            para.add_(tmp)
        print(para.data)

if __name__ == "__main__":
    mp.set_start_method('spawn')
    torch.manual_seed(101)
    model = Net()
    model.share_memory()
    for para in model.parameters():
        print(para.data)

    num_processes = 2
    processes = []
    for rank in range(num_processes):
        p = mp.Process(target=addNet, args=(model, ))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

    for para in model.parameters():
        print(para.data)
st178881
Hi @mrshenli, Thank you very much for running the code on your end. Yes, I tested it and it works when launching the script from the command line. However, it does not work in the Jupyter notebook, which is interesting. Going back to the first question (ZiyiZhu: "The big picture is shown below:"), do you see any potential problems with this setup? Thank you! Best, Ziyi
st178882
Hi @ZiyiZhu, I think the code would run, and it is like using multiple NCCL allreduces in a Hogwild manner, but I am not confident about the correctness of DDP in this use case. The allreduce is not an atomic operation; it needs to go through the ring (say we are using ring allreduce here) twice to collect and propagate the values. So it is possible that, after allreduce, model parameters on different processes in the same group are no longer in sync. This breaks DDP's assumption, and this gap could get larger over more iterations. You might need some extra code to re-sync the DDP model parameters every n iterations.
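A minimal sketch of such a periodic re-sync, assuming ddp_model is the DistributedDataParallel wrapper and resync_every is chosen by the application:

import torch
import torch.distributed as dist

def resync_params(ddp_model):
    # Broadcast rank 0's parameters and buffers so all replicas match again.
    with torch.no_grad():
        for p in ddp_model.module.parameters():
            dist.broadcast(p, src=0)
        for b in ddp_model.module.buffers():
            dist.broadcast(b, src=0)

# inside the training loop:
# if iteration % resync_every == 0:
#     resync_params(ddp_model)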
st178883
I guess we can only find out by training some real models with this scheme. Async training is full of surprises.
st178884
I see, and thank you very much for the insights. I would try to re-sync after some iterations or maybe use other schemes such as ps-worker instead. Best,
st178885
Hi everyone! I am a beginner in PyTorch. I want to divide a dataset into two parts, a train set and a validation set, when using torch.distributed. I know that on a single GPU I can do this using a sampler:

indices = list(range(len(train_data)))
train_loader = torch.utils.data.DataLoader(
    train_data, batch_size=args.batch_size,
    sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[:split]),
    pin_memory=True, num_workers=2)

But when I want to train in a parallel way using torch.distributed, I have to use another sampler, namely:

sampler = torch.utils.data.distributed.DistributedSampler(train_data)

So how should I use the two samplers together, so that I can split the dataset and distribute it at the same time? Thank you very much for any help!
st178886
Solved by sunshk1227 in post #2.
st178887
Yeah, I found a solution with the help of Szymon Maszke. Use torch.utils.data.random_split instead. Namely:

train_data, val_data = torch.utils.data.random_split(
    train_data, (num_train, num_val))
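For completeness, a sketch of how this could be combined with the distributed sampler from the original question; num_train, num_val, world_size, rank, and args are assumed to be defined elsewhere:

import torch

train_data, val_data = torch.utils.data.random_split(
    train_data, (num_train, num_val))

# Shard only the training split across processes; validation can run on
# every rank (or only on rank 0) without a distributed sampler.
train_sampler = torch.utils.data.distributed.DistributedSampler(
    train_data, num_replicas=world_size, rank=rank)
train_loader = torch.utils.data.DataLoader(
    train_data, batch_size=args.batch_size, sampler=train_sampler,
    pin_memory=True, num_workers=2)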
st178888
I try to use distributed_rpc to implement parameter server. I would like to use zero_grad and lr_scheduling feature in the trainer. But it seems that the DistributedOptimizer does not support this. Is there any workaround?
st178889
Solved by mrshenli in post #2.
st178890
Hey @Kunlin_Yang

zero_grad

The reason DistributedOptimizer does not provide a zero_grad API is that the gradients of each backward pass are stored in their own dedicated context (instead of in param.grad), and the context is cleared when exiting the with dist_autograd.context() as context_id: scope. So zero_grad is not needed here. We can certainly add it if necessary, and it won't be too hard to implement in application code either, e.g.:

def zero_grad(pr, context_id):
    dist_autograd.get_gradients(context_id)[pr.local_value()].zero_()

with dist_autograd.context() as context_id:
    # omitting forward-backward-optstep here
    futs = []
    for pr in param_rrefs:
        futs.append(rpc.rpc_async(pr.owner(), zero_grad, args=(pr, context_id)))
    [fut.wait() for fut in futs]

Or is there a different reason you would like to use the zero_grad API?

lr_scheduling

Currently, there is no distributed implementation of lr scheduling yet. I created an issue to track this: https://github.com/pytorch/pytorch/issues/38548

For now, you will need to do that using the raw RPC API. You can access the RRefs of the remote optimizers through DistributedOptimizer().remote_optimizers, so it can be something like:

def create_lr_scheduler(opt_rref):
    # create and return lr_scheduler
    ...

def lrs_step(lrs_rref):
    lrs_rref.local_value().step()

opt = DistributedOptimizer(...)

lrs_rrefs = []
for opt_rref in opt.remote_optimizers:
    lrs_rrefs.append(rpc.remote(opt_rref.owner(), create_lr_scheduler, args=(opt_rref,)))

with dist_autograd.context() as context_id:
    # omitting forward-backward-optstep here
    futs = []
    for lrs_rref in lrs_rrefs:
        futs.append(rpc.rpc_async(lrs_rref.owner(), lrs_step, args=(lrs_rref,)))
    [fut.wait() for fut in futs]

If you are using the master branch, the above code can be simplified with the RRef.rpc_async() API.
st178891
I am using the torch.distributed.rpc library and the RPC agent crashes if I don't make any RPC call for 30 minutes. Is this an expected behavior of the RPC library? I think it crashes because it assumes something went wrong when there is no RPC traffic for 30 minutes; however, it times out and crashes even when there was no RPC traffic simply because my code was written that way! Below is a code snippet which can reproduce the problematic behavior. If waiting for 30 minutes is too long to test, you can change the hardcoded value in the rpc/backend_registry.py file (detailed in the comment).

import torch.distributed.rpc as rpc
from torch.multiprocessing import Process
import os
import time

def do_nothing():
    pass

def test(rank, size):
    rpc.init_rpc("Rank"+str(rank), rank=rank, world_size=size)
    print("Rank %s rpc init" % rank)
    i = 0
    # To test easily, I changed <PATH_TO_TORCH_LIB>/torch/distributed/rpc/backend_registry.py.
    # Under def _process_group_init_backend_handler(),
    # I changed the below line
    # >> process_group_timeout = rpc_constants.DEFAULT_PROCESS_GROUP_TIMEOUT
    # (which makes the timeout 30 minutes), to somewhat shorter value, e.g.,
    # >> process_group_timeout = datetime.timedelta(seconds=10).
    # Otherwise, if I wait for 30 min the problem still occurs.

    ## Loop that does not do anything for a long time...
    while i < 10:
        time.sleep(1)
        print("Rank %s %s sec passed..." % (rank, i))
        ## Uncommenting the below two lines makes the crash go away!
        ## I.e., generating some RPC traffic.
        #target = rank ^ 0x1
        #rpc.rpc_sync("Rank"+str(target), do_nothing)
        i += 1
    rpc.shutdown()
    print("Rank %s rpc shutdown" % rank)
    pass

if __name__ == "__main__":
    os.environ['MASTER_ADDR'] = "localhost"
    os.environ['MASTER_PORT'] = "29502"
    processes = []
    for rank in [0,1]:
        p = Process(target=test, args=(rank, 2, ))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

The error message is:

[E process_group_agent.cpp:664] Encountered exception in ProcessGroupAgent::listenLoop(): [/pytorch/third_party/gloo/gloo/transport/tcp/unbound_buffer.cc:84] Timed out waiting 10000ms for recv operation to complete on worker 1. This means that the RPC agent is in an unhealthy state and unusable.

I wonder if this is a bug, an expected behavior, or if I am using the API in an incorrect way. If it is expected behavior, is there any workaround? I am mainly experiencing this because my code has a process that does not make any RPC calls, but instead calls functions under torch.distributed, such as distributed.all_reduce(). I first tried not initializing RPC at all in those processes and instead calling distributed.init_process_group(). However, this made the rpc.init_rpc() calls in other processes hang or crash; I suspect that the problem happens because rpc.init_rpc() calls distributed.init_process_group() internally, and somehow they don't play well when some processes call init_process_group() via init_rpc() and others call it directly… If RPC timing out after 30 min is expected behavior, maybe I need to find a way to make some processes call rpc.init_rpc() and others call distributed.init_process_group() without failure. Thank you in advance.
st178892
Solved by mrshenli in post #4.
st178893
Hey @kmaeng, there is actually RPC activity for graceful RPC shutdown. See the code below; it's basically using an RPC to prevent the idle process from exiting too early.

https://github.com/pytorch/pytorch/blob/9d0e935b489a33b87711b9d5d3525a7282ab89c1/torch/distributed/rpc/api.py#L133-L192

@_require_initialized
def _wait_all_workers():
    r"""
    Block until all local and remote RPC processes reach this method and wait
    for all outstanding work to complete. Every RPC process must call this
    method before exit to perform a graceful shutdown. This should be used to
    terminate the RPC framework, and there is no guarantee that the RPC
    framework will work after this method returns.
    """
    assert (
        _ALL_WORKER_NAMES is not None
    ), "`_ALL_WORKER_NAMES` is not initialized for `def _wait_all_workers`."
    leader_worker_name = sorted(_ALL_WORKER_NAMES)[0]
    self_worker_name = _get_current_rpc_agent().get_worker_info().name
    global _wait_all_workers_sequence_id
    with _wait_all_workers_dict_lock:
        sequence_id = _wait_all_workers_sequence_id
        _wait_all_workers_sequence_id += 1

To get around this, you can increase the default RPC timeout. Depending on the version you are using, you can either provide the timeout value in init_rpc (with v1.5), or directly call rpc._set_rpc_timeout (with v1.4). Or, if you know for sure when a process can safely exit, you can use shutdown(graceful=False) and do the termination detection in application code. Per-op (rpc_sync/rpc_async/remote) timeouts are coming soon.
st178894
Thank you for your response. I don't fully understand your answer. Can you clarify some points? I don't understand what you are trying to show with the code you linked. Are you saying I can use _wait_all_workers() or a snippet of the code inside it to work around the issue? (I am already calling rpc.shutdown() at the end, which internally calls this; it is just that the processes die before reaching it.) My main issue is that the RPC process group agent is killed after 30 minutes of being idle. Are you suggesting I just let it die and pass graceful=False so that other processes do not die while trying to shut down RPC? I am using v1.5 and tried giving a timeout value to init_rpc, but it did not work. I followed the code and found that the 30 min timeout only changes when I change this line: https://github.com/pytorch/pytorch/blob/91f451a5e69d2969d730744e98e059d05e63a84d/torch/distributed/rpc/backend_registry.py#L114 As you can see, that line and the ones below take rpc_constants.DEFAULT_PROCESS_GROUP_TIMEOUT and call dist.init_process_group(), without using the value I provided. From my testing, that value was generating the error I keep seeing (afterward, when constructing the ProcessGroupAgent, the timeout value I provided is passed, but not to dist.init_process_group()). In summary, is it normal to see the error message I posted if I call init_rpc and don't use it for 30 minutes? Thank you for your help. I really appreciate it.
st178895
I see, sorry that I misread the original question.

In summary, is it normal to see the error message I posted if I call init_rpc and don't use it for 30 minutes?

This is indeed a bug, and I think it is due to the following code, where the ProcessGroup RPC agent's recvAnysource timed out. We should have passed the RPC timeout to the process group or set it to infinity in the listen loop.

https://github.com/pytorch/pytorch/blob/1f87f15ba3cd46fe5474f7783ff80ef06950e157/torch/csrc/distributed/rpc/process_group_agent.cpp#L710-L715

void ProcessGroupAgent::listenLoopInternal() {
  while (rpcAgentRunning_.load()) {
    // rank, tensor size, message type
    std::vector<torch::Tensor> preamble = {torch::empty({4}, {torch::kInt64})};
    auto work = pg_->recvAnysource(preamble, pg_->getRank());
    {

Thanks for flagging this. Do you want to create an issue on GitHub to track it? We will fix it. Thanks!
st178896
kmaeng: As you can see, that line and the ones below take rpc_constants.DEFAULT_PROCESS_GROUP_TIMEOUT and call dist.init_process_group(), without using the value I provided. From my testing, that value was generating the error I keep seeing (afterward, when constructing the ProcessGroupAgent, the timeout value I provided is passed, but not to dist.init_process_group()).

Yes, exactly, we have a tracking issue related to this problem: https://github.com/pytorch/pytorch/issues/33583
st178897
I submitted an issue (https://github.com/pytorch/pytorch/issues/38531 11), thanks!
st178898
Hi, I noticed that there is a large (~500MB) CUDA context created per process on the GPU. You can see it simply by doing:

import torch
torch.randn(1, device=0)

It takes 500MB (it used to take 750MB in previous versions). When multiprocessing on the same GPU this is a lot of unneeded memory. How can we work around this?
st178899
Do multiple processes have to work on the same set of GPUs? Can each process work on an exclusive set of GPUs and use CUDA_VISIBLE_DEVICES to control which devices they see?
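A small sketch of that suggestion, with one worker process per GPU; CUDA_VISIBLE_DEVICES has to be set before the child process touches CUDA, which is why the spawn start method is used here:

import os
import torch
import torch.multiprocessing as mp

def worker(gpu_id):
    # Each process only ever sees its own physical GPU, exposed as "cuda:0".
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    x = torch.randn(1, device="cuda")   # first CUDA call: context is created only on this GPU
    print(gpu_id, x.device)

if __name__ == "__main__":
    mp.set_start_method("spawn")        # children start fresh, without an inherited CUDA context
    procs = [mp.Process(target=worker, args=(i,)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()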
st178900
I am not aware if there is a way to avoid the per-process CUDA context or reduce its size. @ptrblck and @albanD might know more.
st178901
I don’t think it’s possible to reuse a single CUDA context between processes, but haven’t looked deeply into it. We expect the best performance using a single process per GPU. What is your use case @seliad that you want to use multiple processes on the same device? Are you seeing any performance gains (regardless of the wasted memory)?
st178902
For processes A and B, each has to (1) do distributed communication (e.g., A->C, B->D) and (2) share parameters (A<->B). For the distributed communication I need different ranks (I'm currently using CUDA-aware MPI). Even if there were an option to do distributed communication with threads (I think there is in MPI, not sure if PyTorch supports it), in Python it is a pretty bad idea.
st178903
I also noticed that when using 2 processes communicating through a Queue, the sender process (e.g., sending from device0 to device1 with copy_) holds this CUDA context on both devices. I created the buffer at the sender and sent it through a queue (reusing it) as recommended in the docs (a push communication model). Another option is to have a thread in the receiver waiting on that queue and pulling from it; I guess that won't cause this extra memory? So I wonder, maybe it could be (theoretically, and very partially) solved by creating all tensors in a single process (single owner), and sending them to all processes sharing the device.
st178904
Hi, I would like to put multiple models into CPU memory with shared memory support, so that I can easily transfer models among multiple processes. I am using code logic like the following:

...
a_list_of_models = load_models()
for model in a_list_of_models:
    model.share_memory()
...

The code works fine with 4-5 models, but if I launch more models, I get RuntimeError: unable to open shared memory object. I saw many other people have the same runtime error while using the dataloader. I am not using a dataloader here, so I am not able to set the num_worker parameter to 0. I have also checked that the shared memory limit on my machine is unlimited, and the physical memory is more than enough to hold 100 models. The most relevant topic is this one: How to configure shared memory size? but, unfortunately, there is no answer there. Any suggestions?
st178905
Hi, Could you possibly share a more comprehensive repro of the issue such as the models that you are loading, if possible? How did you check the shared memory limit on your machine? Does the output of ipcs -lm indicate that the shared memory is unlimited (this should tell you the max no. of SHM segments and max SHM size)? Also, just to confirm, have you checked if any other processes are taking up too much shared memory on your machine? To increase the shared memory limit, you can try setting the kernel.shmmax parameter (i.e. sysctl -w kernel.shmmax=...), and follow the steps here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/tuning_and_optimizing_red_hat_enterprise_linux_for_oracle_9i_and_10g_databases/sect-oracle_9i_and_10g_tuning_guide-setting_shared_memory-setting_shmmni_parameter to increase the # of shared memory segments available.
st178906
Hi Rohan, here is the code snippet for reproducing the bug:

import torch
from torchvision import models
import time

def main():
    """"""
    model_list = [
        ['resnet152', models.resnet152],
        ['inception_v3', models.inception_v3],
        ['vgg16', models.vgg16],
        ['vgg19', models.vgg19],
        ['vgg19_bn', models.vgg19_bn],
        ['densenet201', models.densenet201],
        ['densenet169', models.densenet169],
        ['resnet152-2', models.resnet152],
        ['resnet152-3', models.resnet152],
        ['resnet152-4', models.resnet152],
        ['resnet152-5', models.resnet152],
        ['resnet152-6', models.resnet152],
        ['resnet152-7', models.resnet152],
        ['resnet152-8', models.resnet152],
        ['resnet152-9', models.resnet152]
    ]

    models_dict = {}
    for m in model_list:
        model = m[1](pretrained=True)
        model.share_memory()
        models_dict[m[0]] = model
        print('loaded ', m[0])

    # while True:
    #     time.sleep(1)

if __name__ == "__main__":
    main()

Here is the output of ipcs -lm:

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 18014398509465599
max total shared memory (kbytes) = 18014398509481980
min seg size (bytes) = 1

I have tried to increase the max number of segments to 8192 (using sysctl -w kernel.shmmni=8192), but it does not help.
st178907
Another interesting fact: before the program crashed, I didn't see any files created in the /dev/shm folder, while the disk usage of /dev/shm was increasing (based on the df -h command).
st178908
I am having the issue that everyone else has, where a model that uses BatchNorm has poorer accuracy when using DDP. According to this issue (github.com/Microsoft/human-pose-estimation.pytorch, "Why do you disable cudnn for batch_norm?"), I am supposed to patch batch norm somehow:

def monkey_patch_bn():
    # print(inspect.getsource(torch.nn.functional.batch_norm))
    def batch_norm(input, running_mean, running_var, weight=None, bias=None,
                   training=False, momentum=0.1, eps=1e-5):
        if training:
            size = input.size()
            size_prods = size[0]
            for i in range(len(size) - 2):
                size_prods *= size[i + 2]
            if size_prods == 1:
                raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
        return torch.batch_norm(
            input, weight, bias, running_mean, running_var,
            training, momentum, eps, False
        )
    torch.nn.functional.batch_norm = batch_norm

But I am not sure how to do it if my code is like this:

def convbn(in_planes, out_planes, kernel_size, stride, pad, dilation, bn_running_avg=False):
    return nn.Sequential(nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size,
                                   stride=stride,
                                   padding=dilation if dilation > 1 else pad,
                                   dilation=dilation, bias=False),
                         nn.BatchNorm2d(out_planes, track_running_stats=bn_running_avg))
st178909
Hi, Could you try disabling the CuDNN backend with torch.backends.cudnn.enabled = False? According to posts such as Training performance degrades with DistributedDataParallel, this can improve training. Also, have you given SyncBatchNorm (https://pytorch.org/docs/stable/nn.html#syncbatchnorm) a try? This makes batch statistics be computed across all GPUs in use, instead of being computed separately for the batches passed to each device. (Note that, as per the documentation, you'll have to change your code to spawn a single process per GPU if you're not training that way already.)
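A hedged sketch of the SyncBatchNorm conversion; build_model() and rank stand in for the actual model constructor (e.g., one using the convbn blocks above) and the per-process GPU index:

import torch
import torch.nn as nn

model = build_model()                                    # assumed constructor
# Replace every BatchNorm*d layer with SyncBatchNorm before wrapping with DDP,
# so batch statistics are reduced across all processes in the group.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
model = model.to(rank)                                   # one process per GPU
model = nn.parallel.DistributedDataParallel(model, device_ids=[rank])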
st178910
Hi, I’m struggling with Docker containerization seeming to negate the speedup of distributed training: GPU training with nn.parallel.DistributedDataParallel 2 processes with 1 GPU each are about 2x faster than 1 process with 1 GPU when run directly on a Google Compute Engine n1-standard-16 instance. Same 2 processes each in one Docker container with one GPU on Google Kubernetes Engine are slower than 1 process with 1 GPU, whether that process is in a container or not. Both containers are again on a single n1-standard-16 machine. Per-process batch size always = 3. I measure speed through the time taken to accumulate gradients to an equivalent batch size of 24. Should take 4 iterations with 2 GPUs or 8 with one. (if it’s relevant) using AMP with opt level O1. Slow communication due to containerization? Failure to use GPU-GPU communication? Containers: Image based on pytorch/pytorch:1.3-cuda10.1-cudnn7-devel Request 24 GB memory Seem to use more CPU than raw processes Use host network and IPC namespace for the init_process_group TCP initialization to work (is this the best way?) I tried: NCCL Gloo (bit slower than NCCL) Putting containers on 2 separate machines (quite a bit slower) NCCL initialization logs: NCCL INFO Bootstrap : Using [0]eth0:10.128.0.72<0> [1]cbr0:10.44.78.1<0> [2]vetha22648b8:fe80::286b:3cff:fef3:6eea%vetha22648b8<0> NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] NCCL INFO NET/Socket : Using [0]eth0:10.128.0.72<0> [1]cbr0:10.44.78.1<0> [2]vetha22648b8:fe80::286b:3cff:fef3:6eea%vetha22648b8<0> NCCL version 2.4.8+cuda10.1 NCCL INFO Bootstrap : Using [0]eth0:10.128.0.72<0> [1]cbr0:10.44.78.1<0> [2]vetha22648b8:fe80::286b:3cff:fef3:6eea%vetha22648b8<0> NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] NCCL INFO NET/Socket : Using [0]eth0:10.128.0.72<0> [1]cbr0:10.44.78.1<0> [2]vetha22648b8:fe80::286b:3cff:fef3:6eea%vetha22648b8<0> NCCL INFO Setting affinity for GPU 0 to ffff NCCL INFO Setting affinity for GPU 0 to ffff NCCL INFO Could not find real path of /sys/class/net/cbr0/device NCCL INFO include/net.h:19 -> 2 NCCL INFO Could not find real path of /sys/class/net/vetha22648b8/device NCCL INFO include/net.h:19 -> 2 NCCL INFO CUDA Dev 0[0], Socket NIC distance : PHB SYS SYS NCCL INFO Could not find real path of /sys/class/net/cbr0/device NCCL INFO include/net.h:19 -> 2 NCCL INFO Could not find real path of /sys/class/net/vetha22648b8/device NCCL INFO include/net.h:19 -> 2 NCCL INFO CUDA Dev 0[1], Socket NIC distance : PHB SYS SYS NCCL INFO Channel 00 : 0 1 NCCL INFO Ring 00 : 0 -> 1 [receive] via NET/Socket/0 NCCL INFO NET/Socket: Using 1 threads and 1 sockets per thread NCCL INFO Ring 00 : 1 -> 0 [send] via NET/Socket/0 NCCL INFO Ring 00 : 1 -> 0 [receive] via NET/Socket/0 NCCL INFO NET/Socket: Using 1 threads and 1 sockets per thread NCCL INFO Ring 00 : 0 -> 1 [send] via NET/Socket/0 NCCL INFO Using 256 threads, Min Comp Cap 7, Trees disabled NCCL INFO comm 0x7f478c0019e0 rank 0 nranks 2 cudaDev 0 nvmlDev 0 - Init COMPLETE NCCL INFO Launch mode Parallel NCCL INFO comm 0x7f82300019e0 rank 1 nranks 2 cudaDev 0 nvmlDev 1 - Init COMPLETE Any help would be greatly appreciated!
st178911
Hi, since you mentioned that 2 processes with 1 GPU each attains the expected speedup in the Google cloud env, it leads me to think that there may be a difference in configuration between that and the Docker env. Could you try the following and see if it works for you? Setting export OMP_NUM_THREADS=1 as explained in https://github.com/pytorch/pytorch/issues/22451 42 For your hypothesis about slower GPU to GPU communication, it may be worthwhile to debug which portions of the training are slower on the Docker instances. You can add instrumentation to determine which parts of training (initialization, data loading, forward, backward, etc) are slower than on Compute Engine. One way to do this would be to use the torch.cuda.Event.elapsed_time() API (https://pytorch.org/docs/stable/_modules/torch/cuda/streams.html#Event.elapsed_time 2) to record GPU computations (an example is available here: https://pytorch.org/docs/stable/notes/cuda.html#cuda-semantics 1)
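A minimal timing sketch with CUDA events, following the suggestion above; model and inputs are assumed to already live on the GPU, and the same pattern can wrap data loading or any other phase:

import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
output = model(inputs)          # assumed model and inputs
loss = output.sum()
loss.backward()
end.record()

torch.cuda.synchronize()        # wait for the recorded work to finish
print("fwd+bwd took %.2f ms" % start.elapsed_time(end))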
st178912
Hello,I added data prefetching by using cuda stream like this: class data_prefetcher(): def __init__(self, loader): self.loader = iter(loader) self.stream = torch.cuda.Stream() self.mean = torch.tensor([0.485 * 255, 0.456 * 255, 0.406 * 255]).cuda().view(1,3,1,1) self.std = torch.tensor([0.229 * 255, 0.224 * 255, 0.225 * 255]).cuda().view(1,3,1,1) self.preload() def preload(self): try: self.next_input, self.next_target = next(self.loader) except StopIteration: self.next_input = None self.next_target = None return with torch.cuda.stream(self.stream): self.next_input = self.next_input.cuda(non_blocking=True) self.next_target = self.next_target.cuda(non_blocking=True) self.next_input = self.next_input.float() self.next_input = self.next_input.sub_(self.mean).div_(self.std) def next(self): torch.cuda.current_stream().wait_stream(self.stream) input = self.next_input target = self.next_target if input is not None: input.record_stream(torch.cuda.current_stream()) if target is not None: target.record_stream(torch.cuda.current_stream()) self.preload() return input, target (come from https://github.com/NVIDIA/apex/blob/master/examples/imagenet/main_amp.py#L256) to training logic,it works fine in single GPU training,but when moving to multiprocess context,I got error like this: File "/usr/lib/python3.6/multiprocessing/process.py", line 105, in start self._popen = self._Popen(self) File "/usr/lib/python3.6/multiprocessing/context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/usr/lib/python3.6/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/usr/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.6/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: can't pickle Stream objects After debuging,I found torch.cuda.streams.Stream triggered this exception,Question are: 1,Isn’t it possible to use cuda stream in torch.multiprocess context? 2,If not,any examples?
st178913
Hi @Alex_Luya If all you need is syncing streams across processes, you can use the ipc_handle() 12 API to pass CUDA events across processes. See the example in the test 32.
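A rough sketch of the IPC-handle approach, assuming a torch.multiprocessing queue q is used to move the handle between the two processes (the process-spawning plumbing is omitted); function names here are illustrative:

import torch

def producer(q, stream):
    # Record an interprocess event on the producer's stream and ship only its handle.
    event = torch.cuda.Event(interprocess=True)
    event.record(stream)
    q.put(event.ipc_handle())

def consumer(q, device):
    # Rebuild the event from the handle and make the current stream wait on it.
    handle = q.get()
    event = torch.cuda.Event.from_ipc_handle(device, handle)
    torch.cuda.current_stream().wait_event(event)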
st178914
I used four GPUs to train a model. My training strategy is divided into two stages. In the first stage, the model is trained normally, and then in the second stage, the model is loaded with the optimal model of the first stage. Continue Training, but at this stage it appeared Cuda out of memory error. This is the error: /root/anaconda3/envs/python367/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown len(cache)) /root/anaconda3/envs/python367/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown len(cache)) /root/anaconda3/envs/python367/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown len(cache)) Traceback (most recent call last): File "dogs_test3.py", line 573, in <module> my_launch(args) File "dogs_test3.py", line 563, in my_launch mp.spawn(train,nprocs=world_size,args=(args,)) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn while not spawn_context.join(): File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join raise Exception(msg) Exception: -- Process 1 terminated with the following error: Traceback (most recent call last): File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap fn(i, *args) File "/root/dogs_test/dogs_test3.py", line 538, in train global_feat, local_feat, cls_score = model(image) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 447, in forward output = self.module(*inputs[0], **kwargs[0]) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/root/dogs_test/dogs_test3.py", line 213, in forward x = self.backbone(x) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward input = module(input) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward input = module(input) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward input = module(input) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/geffnet/efficientnet_builder.py", line 237, in forward x = self.conv_pwl(x) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/module.py", line 
532, in __call__ result = self.forward(*input, **kwargs) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 345, in forward return self.conv2d_forward(input, self.weight) File "/root/anaconda3/envs/python367/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward self.padding, self.dilation, self.groups) RuntimeError: CUDA out of memory. Tried to allocate 126.00 MiB (GPU 1; 10.76 GiB total capacity; 6.98 GiB already allocated; 129.69 MiB free; 7.17 GiB reserved in total by PyTorch) This is my code: def my_launch(args): world_size=args['num_machines']*args['num_gpus_per_machine'] args['world_size']=world_size os.environ['MASTER_ADDR']='127.0.0.1' os.environ['MASTER_PORT']='27925' mp.spawn(train,nprocs=world_size,args=(args,)) I commented out the code of step1 and loaded the checkpoint directly def train(gpu,args): rank=gpu dist.init_process_group( backend='nccl', init_method='env://', world_size=args['world_size'], rank=rank ) torch.manual_seed(0) torch.cuda.set_device(gpu) train_info, valid_info = stratification_kfold(names, image_label, 5) train_names, valid_names = train_info[0], valid_info[0] train_ds = TrainDataset(train_names, image_label, label_map_image, transform_train) valid_ds = TestDataset(valid_names, image_label, transform_valid) valid_dl = Data.DataLoader(valid_ds, batch_size=8, drop_last=True) train_sampler=Data.distributed.DistributedSampler(train_ds,num_replicas=args['world_size'],rank=0) train_dl = Data.DataLoader(train_ds, batch_size=8, collate_fn=train_collate, shuffle=False,sampler=train_sampler, drop_last=True) step1_epochs = 30 step2_epochs = 30 criterion = Criterion() early_stop = EarlyStopping() model = myNet() model.cuda(gpu) model=nn.parallel.DistributedDataParallel(model,device_ids=[gpu]) dist.barrier() map_loacation={'cuda:%d'%0:'cuda:%d'%gpu} # # step1_optimizer = torch.optim.SGD(model.parameters(), lr=0.9, weight_decay=0.0001) # for epoch in range(step1_epochs): # with tqdm(total=len(train_dl)) as pbar: # train_loss = 0 # steps = len(train_dl) # for image, labels in train_dl: # model.train() # step1_optimizer.zero_grad() # # image = image.cuda(gpu).float() # labels=labels.cuda(gpu) # global_feat, local_feat, cls_score = model(image) # loss = criterion(global_feat, local_feat, cls_score, labels,gpu) # train_loss += loss # loss.backward() # step1_optimizer.step() # pbar.update(1) # print('train_loss:{}'.format(train_loss / steps)) # model.eval() # metric = evaluate(model, valid_dl) # early_stop(metric, model) # if early_stop.early_stop: # break checkpoint_path = '/root/dogs/step2.pt' checkpoint = torch.load(checkpoint_path,map_location=map_loacation) model.load_state_dict(checkpoint['state']) dist.barrier() step2_optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0001) scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(step2_optimizer, T_0=5, T_mult=2) early_stop.counter = 0 early_stop.early_stop = False early_stop.best_score = 0 early_stop.patience = 8 for epoch in range(step2_epochs): with tqdm(total=len(train_dl)) as pbar: train_loss = 0 steps = len(train_dl) for image, labels in train_dl: model.train() step2_optimizer.zero_grad() image = image.cuda(gpu).float() labels=labels.cuda(gpu) global_feat, local_feat, cls_score = model(image) loss = criterion(global_feat, local_feat, cls_score, labels,gpu) train_loss += loss loss.backward() step2_optimizer.step() pbar.update(1) print('train_loss:{}'.format(train_loss / steps)) model.eval() 
metric = evaluate(model, criterion) scheduler.step() early_stop(metric, model) if early_stop.early_stop: break I saved the checkpoint of the model in early_stop class EarlyStopping: """Early stops the training if validation loss doesn't improve after a given patience.""" def __init__(self, patience=4, best_score=None,delta=0): self.patience = patience self.counter = 0 self.best_score = best_score self.early_stop = False self.delta = delta def __call__(self, val_metric, model): score = val_metric if self.best_score is None: self.best_score = score self.save_checkpoint(val_metric, model) elif score < self.best_score + self.delta: self.counter += 1 print(f'EarlyStopping counter: {self.counter} out of {self.patience}') if self.counter >= self.patience: self.early_stop = True else: self.best_score = score self.save_checkpoint(val_metric, model) self.counter = 0 def save_checkpoint(self, metric, model): state = {'best_metric': metric, 'state': model.state_dict()} torch.save(state, '/root/dogs/step2.pt') Why does cuda out of memory error appear after loading checkpoint?
st178915
RuntimeError: CUDA out of memory. Tried to allocate 126.00 MiB (GPU 1; 10.76 GiB total capacity; 6.98 GiB already allocated; 129.69 MiB free; 7.17 GiB reserved in total by PyTorch) Can you try running torch.cuda.empty_cache() to free up the reserved 7.17GB memory? These reserved memory might be full of small blocks that cannot accommodate the requested 126MB.
st178916
Another thing that could help is, instead of using torch.cuda.set_device(gpu), you can try setting CUDA_VISIBLE_DEVICES, this sometimes can avoid creating unnecessary CUDA context on cuda:0.
st178917
I have two separate models in my algorithm: a large model that resides on the CPU and a small model that goes to the GPU. I am using DDP to train the small model on multiple GPUs while the large model remains on the CPU. I have 4 GPUs, and I observe that the CPU model is also loaded 4 times, which causes an OOM on the CPU. Is there any way to keep a single CPU model and multiple GPU models with DDP?
st178918
Do you mean you want to share the same CPU model across 4 DDP processes? If so, you can use torch.multiprocessing.Queue 8 to share the model. Does the CPU model participate in training or just inference?
st178919
sorry for late reply, missed your comment. Yes, I want to use DDP but have a single copy in the CPU. my model runs forward and backward on GPU and optimizer on CPU.
st178920
Yes, I want to use DDP but have a single copy in the CPU. In this case, you might need to use multiprocess communication 2 to share tensors. Sorry that I still don’t fully understand the use case. Some pseudo code would be helpful.
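Some pseudo-code for the "single shared CPU copy" idea discussed above; LargeCPUModel and SmallModel are placeholders for the actual model classes, and the rendezvous env vars assume a single machine:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size, cpu_model):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    gpu_model = SmallModel().to(rank)                    # assumed model class
    gpu_model = torch.nn.parallel.DistributedDataParallel(gpu_model, device_ids=[rank])
    # cpu_model's parameters live in shared memory, so every rank reads the
    # same storage instead of holding its own copy.
    ...

if __name__ == "__main__":
    world_size = 4
    cpu_model = LargeCPUModel()                          # assumed model class
    cpu_model.share_memory()                             # move params/buffers into shared memory once
    mp.spawn(worker, args=(world_size, cpu_model), nprocs=world_size)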
st178921
Thank you. I am trying to load the cpu model in one thread and broadcast in gpu. However, I have an issue with the barrier. Here is my code:

if args.local_rank in [-1, 0]:
    for net1, net2 in zip(self.encoder.layer[0].named_parameters(),
                          model_cpu.bert.encoder.layer[i].named_parameters()):
        net1[1].data.copy_(net2[1].data.clone(), non_blocking=True)

torch.distributed.barrier()

if args.local_rank not in [-1, 0]:
    for name, p in self.encoder.layer[0].named_parameters():
        torch.distributed.broadcast(p, src=0)

My code gets stuck at the barrier. Any idea what could be wrong with it? Should I even use a barrier?
st178922
maralm: I am trying to load the cpu model in one thread and broadcast in gpu.

The default process group is a per-process object, not a per-thread one. Is this just a typo and you actually mean "process"?

if args.local_rank not in [-1, 0]:
    for name, p in self.encoder.layer[0].named_parameters():
        torch.distributed.broadcast(p, src=0)

For collective communications, all ranks must make the same number of c10d API invocations in the same order. It seems that, with the above code, rank 0 is not participating in the broadcast? If you need to broadcast within a subgroup, you will need to first create a subgroup using the new_group API and then call broadcast in that group.
st178923
Sorry, yes, I mean process. I was thinking of loading the model on the CPU in one process, transferring it to GPU 0 (which is in the same process), and from GPU 0 copying the weights to the other GPUs in other processes. Is broadcasting the correct way to do so?
st178924
I was thinking of loading the model on the CPU in one process, transferring it to GPU 0 (which is in the same process), and from GPU 0 copying the weights to the other GPUs in other processes. Is broadcasting the correct way to do so?

Yes, this looks correct to me. DistributedDataParallel actually does a similar thing in its constructor.
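A sketch of that pattern in which every rank calls broadcast in the same order (no barrier or rank split needed); module is the per-rank GPU copy and cpu_module is the CPU model that only rank 0 has loaded:

import torch
import torch.distributed as dist

def sync_from_rank0(module, cpu_module=None):
    if dist.get_rank() == 0 and cpu_module is not None:
        # Rank 0 first copies the CPU weights into its own GPU copy.
        with torch.no_grad():
            for p_gpu, p_cpu in zip(module.parameters(), cpu_module.parameters()):
                p_gpu.copy_(p_cpu)
    # Every rank (including rank 0) then participates in the same broadcasts.
    for p in module.parameters():
        dist.broadcast(p.data, src=0)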
st178925
Hi, I try to run example from tutorial with “GLoo” backend and Point to Point communication. """run.py:""" #!/usr/bin/env python import os import torch import torch.distributed as dist from torch.multiprocessing import Process def run(rank, size): tensor = torch.zeros(1) if rank == 0: tensor += 1 # Send the tensor to process 1 dist.send(tensor=tensor, dst=1) else: # Receive tensor from process 0 dist.recv(tensor=tensor, src=0) print('Rank ', rank, ' has data ', tensor[0]) def init_process(rank, size, fn, backend='gloo'): """ Initialize the distributed environment. """ os.environ['MASTER_ADDR'] = '127.0.0.1' os.environ['MASTER_PORT'] = '29500' dist.init_process_group(backend, rank=rank, world_size=size) fn(rank, size) if __name__ == "__main__": size = 2 processes = [] for rank in range(size): p = Process(target=init_process, args=(rank, size, run)) p.start() processes.append(p) for p in processes: p.join() print("done") When I run it, only “done” is printed on jupyter notebook. How to run it with python? Thanks,
st178926
I tried this with Colab, but cannot reproduce this problem. Sometimes there is weird behavior when using multiprocessing in a notebook. If you directly launch this program from the command line, are the outputs as expected?
st178927
Yes, it works with Python. But I want to ask about running it on Jupyter. How do I make it work in a Jupyter notebook? That is my question. Thanks,
st178928
As I cannot reproduce the error in my Jupyter notebook, I can only guess why the message from the subprocess is not shown. Given that the main process prints "done", I would assume the sub-processes are launched correctly. But since a subprocess didn't print its message, it could be either 1) the sub-process crashed, or 2) the sub-process is not printing to stdout. For 1), you can check the exitcode of the subprocess; adding more logs will also help. For 2), you will need to check local configurations to see if stdout is redirected, or explicitly redirect that print to a file.
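For 1), a tiny check after join() is usually enough; applied to the processes list from the script above:

for p in processes:
    p.join()
    print("exitcode:", p.exitcode)   # non-zero or negative means the subprocess crashed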
st178929
Hi all, I found the solution. I ran Jupyter on a MacBook, and it worked. On Windows, the program only printed "done". Thanks,
st178930
PyTorch distributed package does not support Windows yet. So most likely the subprocess crashed as init_process_group is not available on Windows.
st178931
I want to run a test to see how the synchronization works. I assume that at the end of each batch, DDP would wait for the processes on the world_size GPUs to reach a synchronization point, like the backward pass, to synchronize gradients. If only 2 GPU processes start, I assume that at the end of the first batch, the synchronization on the existing 2 GPUs would time out because the other two never started. What I observed is that the training continued with only 2 GPU processes. How can this be explained? Is my understanding incorrect?
st178932
Sorry about the delay. I assume that at the end of each batch, DDP would wait for the processes on the world_size GPUs to reach the synchronization point like backward pass to synchronize gradients. If only 2 GPUS processes started, I assume that at the end of first batch, the synchronization on the existing 2 GPUS would time out as the other two never started Yes, this is correct. What I observed is that the training continued with only 2 GPU processes. How to explain this? Is my understanding not correct? It should block on the DDP construct or the backward call. Could you please share a code snippet that reproduces the above behavior?
st178933
I'm training a VAE similar to the implementation in PyTorch's GitHub examples. The main function looks like:

if __name__ == "__main__":
    for epoch in range(1, args.epochs + 1):
        train(epoch)

Assuming that I have another input parameter for the training function, pi, I would like to write code that trains multiple models with different parameters pi:

if __name__ == "__main__":
    for i in range(10):
        pi = get_param(seed=i)
        for epoch in range(1, args.epochs + 1):
            train(epoch, pi)

My question is how I can run this in parallel on the GPU, such that each core trains a single model.
st178934
If your GPU is already fully utilized, you won’t be able to train models in parallel and the calls will be added to the queue. On the other hand, if you have multiple devices, you could run the training routines on each device and they will be run in parallel.
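A minimal sketch of that one-run-per-device setup, reusing train and get_param from the question (args has to be picklable, and the training code should create its model and data on the process's current device):

import torch
import torch.multiprocessing as mp

def run_one(i, args):
    torch.cuda.set_device(i % torch.cuda.device_count())   # one GPU per process
    pi = get_param(seed=i)
    for epoch in range(1, args.epochs + 1):
        train(epoch, pi)

if __name__ == "__main__":
    mp.set_start_method("spawn")
    procs = [mp.Process(target=run_one, args=(i, args)) for i in range(10)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()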
st178935
How are folks using iterable datasets with DDP? The example for splitting an IterableDataset across workers (or DDP processes) seems a little silly: if I had random access to my dataset (iter_start), I wouldn't be using an iterable dataset in the first place. Has anyone come across / built a better solution?
st178936
This is a recurring request, e.g. here or here. Please feel free to suggest a mechanism.
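One commonly used workaround, sketched here under the assumption that the stream is produced in a fixed order: every rank/worker iterates the full stream but keeps only every k-th element, where k combines the DDP world size and the DataLoader workers.

import itertools
import torch.distributed as dist
from torch.utils.data import IterableDataset, get_worker_info

class ShardedIterable(IterableDataset):
    def __init__(self, generator_fn):
        self.generator_fn = generator_fn        # callable yielding samples in a fixed order

    def __iter__(self):
        rank = dist.get_rank() if dist.is_initialized() else 0
        world = dist.get_world_size() if dist.is_initialized() else 1
        info = get_worker_info()
        nworkers = info.num_workers if info else 1
        wid = info.id if info else 0
        shard = rank * nworkers + wid
        nshards = world * nworkers
        # islice(start, stop, step) skips the samples belonging to other shards.
        return itertools.islice(self.generator_fn(), shard, None, nshards)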
st178937
Hi there, I am working on a project called dog_app.py, within a conda environment on a Windows 10 machine. Although I have (apparently) configured everything to use the GPU, its usage barely goes above 2%. I am moving the model to cuda(), as well as my data. Why is the GPU not being used at all? How do I debug that?

use_cuda = torch.cuda.is_available()
model_scratch = Net()
if use_cuda:
    model_scratch.cuda()
    print("Let's use", torch.cuda.device_count(), "GPU(s)!")  # Prints "Let's use 1 GPU(s)!"
...
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    ...
    model.train()
    for batch_idx, (data, target) in enumerate(loaders['train']):
        if use_cuda:
            data, target = data.cuda(), target.cuda()

I found this test on another thread on the subject, and allocating memory on the GPU worked just fine:

import torch
a = torch.cuda.FloatTensor(10000)
print("Allocated:", round(torch.cuda.memory_allocated(0)/1024**3, 1), "GB")
b = torch.cuda.FloatTensor(20000)
print("Allocated:", round(torch.cuda.memory_allocated(0)/1024**3, 1), "GB")
# Output:
# Allocated: 3.9 GB
# Allocated: 11.8 GB
st178938
Hi, if you’re using Windows, you need to be careful, as CUDA computations are not reported in the Task Manager. You will have to check with nvidia-smi from a command line, I think.
st178939
Hi @albanD, thanks for your reply. It took me a while to figure out how to use the tool, but it seems I have only short bursts of usage. Is that how it is supposed to work?

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi --format=csv --query-gpu=utilization.gpu,fan.speed,temperature.gpu,power.draw -l 1
utilization.gpu [%], fan.speed [%], temperature.gpu, power.draw [W]
6 %, 0 %, 58, 50.59 W
1 %, 0 %, 59, 135.16 W
1 %, 0 %, 58, 50.50 W
51 %, 0 %, 58, 50.59 W
0 %, 0 %, 58, 144.52 W
0 %, 0 %, 58, 50.15 W
0 %, 0 %, 58, 50.25 W
59 %, 0 %, 59, 50.83 W
0 %, 0 %, 58, 136.92 W
0 %, 0 %, 58, 50.39 W
62 %, 0 %, 59, 50.83 W
0 %, 0 %, 59, 50.39 W
0 %, 0 %, 59, 50.59 W
0 %, 0 %, 61, 62.24 W
0 %, 0 %, 59, 50.49 W
0 %, 0 %, 59, 50.59 W
0 %, 0 %, 60, 50.83 W
0 %, 0 %, 59, 50.49 W
0 %, 0 %, 59, 50.39 W
0 %, 0 %, 60, 50.74 W
0 %, 0 %, 60, 50.83 W
36 %, 0 %, 61, 51.42 W
0 %, 0 %, 60, 50.74 W
0 %, 0 %, 60, 50.74 W
st178940
It will depend a lot on your network. But if it is not too big, or your dataloader is not fast enough, then yes, that is expected. You can try adding workers to the dataloader to make sure this is not the bottleneck. Otherwise, increasing the batch size (if you have enough memory) should increase the usage.
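For example (the exact numbers are only illustrative and depend on your machine):

loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=128,    # larger batches keep the GPU busier, memory permitting
    num_workers=4,     # load/augment samples in background processes
    pin_memory=True,   # faster host-to-GPU copies
)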
st178941
I am using DDP to train my model now. In my model I want to calculate the standard deviation across the batch. However, since each device only sees its own sub-batch, I am wondering if I can calculate the standard deviation across the entire batch instead of within each device. The standard deviation will be part of my computation graph. I feel that this is similar to synchronized batch norm and should be doable. How would I go about doing this? Here is an example of what I want to do:

def forward(self, input):
    feats = conv(input)
    batch, channel, height, width = feats.shape
    stddev = feats.view(batch, -1, 1, channel, height, width)
    stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
    stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
    stddev = stddev.repeat(batch, 1, height, width)
    feats = torch.cat([feats, stddev], 1)
    output = conv_last(feats)
    return output

Basically, when I compute stddev, I want to compute it over the entire batch.
st178942
Hey @hij

Buffers are broadcast from rank 0 to the other processes at the beginning of every forward pass. See the code below:

https://github.com/pytorch/pytorch/blob/1f8267931176cc9ecbf00493e5a359a57baca3df/torch/nn/parallel/distributed.py#L509-L513

# Synchronize buffers across processes.
# The process with rank 0 is considered the authoritative copy.
self._distributed_broadcast_coalesced(
    self.modules_buffers[0],
    self.broadcast_bucket_size)

If this is what you need, you can register the stddev as a buffer. If you need something different (e.g., square the stddev and sum it across all processes over the wire), you can call allreduce or allgather in the forward function to do that.
st178943
If I understand you correctly, registering it as a buffer only allows the stddev of rank 0 to be distributed to the other processes. What I want to do is allow the tensors on all processes to contribute to calculating the stddev.
st178944
hij: If i understand you correctly, registering it as a buffer only allows the stddev of rank 0 to be distributed to other processes. Yes. What I want to do is allow all tensors in all processes to contribute to calculating stddev. You can do this by using the collective communication APIs (allreduce/allgather) in the forward pass. One caveat here is that, the collective communication API requires all processes in the same group to invoke the same API in the same order, otherwise, it might hang or the result might be wrong. If you are not sure whether the allreduce/allgather for stddev would interleave with other collectives, you can use the new_group 2 API to create a dedicated process group for collecting/summing stddev.
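For reference, a rough sketch of the dedicated-group idea (the statistic below is just a placeholder):

import torch.distributed as dist

# Create this once, after init_process_group(); every rank must call it.
stddev_group = dist.new_group(ranks=list(range(dist.get_world_size())))

# Later, inside forward(), run the extra collective on the dedicated group so
# it cannot interleave with DDP's own gradient communication:
# dist.all_reduce(stat, op=dist.ReduceOp.SUM, group=stddev_group)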
st178945
I have read that all_gather does not retain gradient information. For my application, I want stddev to be part of the computation graph. How would I go about doing this? Could you also point me to an example/tutorial for this usage? I have not done any distributed training, so I am not sure how to use these functions.
st178946
I am trying to understand the use case here: I have read that all_gather do not retain the gradient information. For my application, I want stddev to be part of the computation graph. How would I go about doing this? IIUC, stddev in this case is an intermediate output in forward and it’s not a model parameter? So you need its gradient during the backward pass to compute parameter gradients, but you don’t need to retain its gradient for optimizer step()? If above is true, why existing parameter gradient synchronization in DDP not sufficient? And could you also point me to an example/tutorial for these usage. Sure, below is the tutorial, and please search for “All-Reduce example.” https://pytorch.org/tutorials/intermediate/dist_tuto.html 6
st178947
I might have been confused. This code snippet is part of the model that I intend to train. However, since I have limited GPU memory, I can only train with a batch size of 1, and there isn't much point in calculating stddev over a batch size of 1 even if I do DDP. So in this case, would allgather/allreduce work?

hij:
def forward(self, input):
    feats = conv(input)
    batch, channel, height, width = feats.shape
    stddev = feats.view(batch, -1, 1, channel, height, width)
    stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
    stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
    stddev = stddev.repeat(batch, 1, height, width)
    feats = torch.cat([feats, stddev], 1)
    output = conv_last(feats)
    return output
st178948
hij: However, since I have limited gpu memory, I can only train with batch size of 1. And there isn't a point to calculate stddev over a batchsize of 1 even if I do DDP. So in this case, would the allgather allreduce work?

I see. I am not sure the result would still be correct in this case even if allgather and allreduce could retain gradients. IIUC, if this is trained without DDP (assuming there is enough GPU memory), then both feats and stddev are calculated based on all inputs. When trained with DDP, feats are now derived only from local inputs, and you would like stddev to be based on the global inputs. So, when you cat feats and stddev, the output of the forward now represents a different thing. I am not sure the loss function can handle that. Even if it can, what does averaging the gradients mean in this case?

If the above (local feats + global stddev) is the expected behavior, there might be a few ways to implement this:

1. Implement a custom autograd function. E.g., its forward function can use an allgather to collect stddev from all processes, and its backward function can use another allgather to collect gradients, extract the part that belongs to the local stddev, and sum them up (see the sketch below).
2. Use torch.distributed.rpc. There can be a master and a few workers, where each worker calculates the feats and stddev for its own input, and then the master gathers all feats and stddev to compute the final loss.

Some more tutorials for this:
a. https://pytorch.org/tutorials/intermediate/rpc_tutorial.html
b. https://pytorch.org/tutorials/intermediate/rpc_param_server_tutorial.html
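To make option 1 concrete, here is a rough, untested sketch of an autograd-aware all_gather (it uses an allreduce in the backward for simplicity); the stddev computation itself is left out, and you would apply this to feats (or to the per-rank stddev inputs) before computing the global statistic:

import torch
import torch.distributed as dist

class AllGatherWithGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, tensor):
        world_size = dist.get_world_size()
        gathered = [torch.zeros_like(tensor) for _ in range(world_size)]
        dist.all_gather(gathered, tensor)
        return torch.stack(gathered)              # shape: [world_size, ...]

    @staticmethod
    def backward(ctx, grad_output):
        # Each rank holds the gradient of its own loss w.r.t. the full stack;
        # sum those across ranks and keep the slice for the local input.
        grad = grad_output.clone()
        dist.all_reduce(grad, op=dist.ReduceOp.SUM)
        return grad[dist.get_rank()]

# Usage inside forward(): all_feats = AllGatherWithGrad.apply(feats)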
st178949
In this case, since there is no batch dependencies in feat, would it be different if it is local feat + global stddev vs global feat + global stddev? Since stddev is concatenated to each feat tensor separately, they should have the same effect? I will try your suggestions. Thank you!
st178950
I am having issues getting DistributedDataParallel to perform well (2 GPUs on the same host perform at ~85-90% of linear scaling, and it gets worse as GPUs or hosts are added). From slack, it seems other users are able to get much closer to 99% of linear with small numbers of nodes/GPUs. I'm seeing this 85-90% scaling behavior on the (shared) work cluster, and on a 2 GPU system I have at home. I haven't tested the full cross product, but I've seen the same behavior on Ubuntu 14.04 and 18.04; CUDA 9.1, 10.0, and 10.2; stock PyTorch 1.4 DDP and NVIDIA Apex DDP; resnet 50, 152, and some toy models. All used fake data from torchvision with batch sizes that use up the majority of GPU RAM.

The training script is here (with light edits to remove comments, etc.): https://gist.github.com/elistevens/7edacdafdb45747a22da2ef0c6ce1af3

OMP_NUM_THREADS=4 EPOCHS=2 EPOCH_SIZE=3840 BATCH_SIZE=64 NODES=1 GPUS=2 ~/v/bin/python min_ddp.py

etc. The numbers here are from my 18.04 home system with 2x 1080 Tis. There's roughly a three-second slowdown for the 2 GPU case, resulting in training going from 22 seconds (1 GPU, 1 epoch) to 25 seconds (2 GPUs, 2 epochs). About a second and a half of that is the {method 'acquire' of '_thread.lock' objects} and the rest seems to be mul_, add_ etc. methods of torch._C._TensorBase objects.

Is this expected? Am I missing something that would cause performance to be poor like this? Thanks for any help. More detailed data is below.

1 GPU

308413 function calls (297131 primitive calls) in 22.053 seconds
Ordered by: internal time

ncalls tottime percall cumtime percall filename:lineno(function)
60 5.305 0.088 5.305 0.088 {method 'run_backward' of 'torch._C._EngineBase' objects}
19320 3.509 0.000 3.509 0.000 {method 'mul_' of 'torch._C._TensorBase' objects}
19320 3.372 0.000 3.372 0.000 {method 'add_' of 'torch._C._TensorBase' objects}
9660 2.298 0.000 2.298 0.000 {method 'addcdiv_' of 'torch._C._TensorBase' objects}
9660 2.124 0.000 2.124 0.000 {method 'sqrt' of 'torch._C._TensorBase' objects}
60 1.741 0.029 14.598 0.243 /home/elis/v/lib/python3.6/site-packages/torch/optim/adam.py:49(step)
9660 1.499 0.000 1.499 0.000 {method 'addcmul_' of 'torch._C._TensorBase' objects}
224 0.671 0.003 0.671 0.003 {method 'acquire' of '_thread.lock' objects}
120 0.548 0.005 0.548 0.005 {method 'to' of 'torch._C._TensorBase' objects}
3180 0.141 0.000 0.141 0.000 {built-in method conv2d}
...

2 GPUs

312342 function calls (301058 primitive calls) in 25.171 seconds
Ordered by: internal time

ncalls tottime percall cumtime percall filename:lineno(function)
60 5.355 0.089 5.355 0.089 {method 'run_backward' of 'torch._C._EngineBase' objects}
19320 4.015 0.000 4.015 0.000 {method 'mul_' of 'torch._C._TensorBase' objects}
19320 3.668 0.000 3.668 0.000 {method 'add_' of 'torch._C._TensorBase' objects}
9660 2.407 0.000 2.407 0.000 {method 'sqrt' of 'torch._C._TensorBase' objects}
9660 2.339 0.000 2.339 0.000 {method 'addcdiv_' of 'torch._C._TensorBase' objects}
264 2.089 0.008 2.089 0.008 {method 'acquire' of '_thread.lock' objects}
60 1.800 0.030 15.833 0.264 /home/elis/v/lib/python3.6/site-packages/torch/optim/adam.py:49(step)
9660 1.566 0.000 1.566 0.000 {method 'addcmul_' of 'torch._C._TensorBase' objects}
120 0.561 0.005 0.561 0.005 {method 'to' of 'torch._C._TensorBase' objects}
105 0.275 0.003 0.275 0.003 {built-in method posix.waitpid}
3180 0.252 0.000 0.252 0.000 {built-in method conv2d}
...

g2-g1 function delta

1.418, {method 'acquire' of '_thread.lock' objects}
0.506, {method 'mul_' of 'torch._C._TensorBase' objects}
0.296, {method 'add_' of 'torch._C._TensorBase' objects}
0.283, {method 'sqrt' of 'torch._C._TensorBase' objects}
0.184, {built-in method posix.waitpid}
0.111, {built-in method conv2d}
0.067, {method 'addcmul_' of 'torch._C._TensorBase' objects}
0.059, /home/elis/v/lib/python3.6/site-packages/torch/optim/adam.py:49(step)
0.05, {method 'run_backward' of 'torch._C._EngineBase' objects}
0.049, {built-in method _posixsubprocess.fork_exec}
0.041, {method 'addcdiv_' of 'torch._C._TensorBase' objects}
0.037, {built-in method relu_}
0.023, {built-in method batch_norm}
0.015, {built-in method max_pool2d}
0.013, {method 'to' of 'torch._C._TensorBase' objects}
0.008, {built-in method torch.distributed._broadcast_coalesced}
st178951
Hey @elistevens

Looks like you are already using DDP with one device per process, which is the recommended setup.

Can you try different OMP_NUM_THREADS configurations? Does it speed up or slow down if you set OMP_NUM_THREADS to 1?

Sometimes the DataLoader can also cause slowdowns. Does it affect the performance if you get rid of the DataLoader and use synthetically generated input/output (just for testing purposes)?
st178952
Yes, I’m using one process per GPU.

OMP_NUM_THREADS at 4 doesn’t have much of a difference from 1; leaving it unset has a very slight performance regression.

I am already using synthetic data via torchvision.datasets.FakeData; each data loader process takes up about 15% CPU with 4 procs, and 60% CPU with one process. The overall scaling jumps to 92% of linear using 1 worker process, but drops to ~65% with num_workers set to zero (so all of the data stuff happens in the main process).

If I get rid of the DataLoader entirely, and just do:

x = torch.rand((batch_size, 3, 224, 224), device='cuda:' + str(gpu_ndx))
y = torch.randint(0, 100, size=(batch_size,), dtype=torch.long, device='cuda:' + str(gpu_ndx))

inside the training loop, then I see a 6% performance improvement in the single-GPU case, and the two-GPU case jumps to 94% of linear (based off of the improved single GPU perf). The primary causes of slowdown are now basic math methods of torch._C._TensorBase:

0.498, {method 'mul_' of 'torch._C._TensorBase' objects}
0.254, {method 'add_' of 'torch._C._TensorBase' objects}
0.176, {method 'sqrt' of 'torch._C._TensorBase' objects}
0.153, {method 'addcdiv_' of 'torch._C._TensorBase' objects}
0.07, {method 'run_backward' of 'torch._C._EngineBase' objects}
0.064, /home/elis/v/lib/python3.6/site-packages/torch/optim/adam.py:49(step)
0.03, {built-in method conv2d}
0.022, {method 'addcmul_' of 'torch._C._TensorBase' objects}
0.012, {built-in method batch_norm}
0.011, {built-in method zeros_like}

The 94% of linear scaling remains the case even if I move the creation of x and y outside the training loop. Switching from Adam to SGD speeds things up marginally, but doesn’t change the ratio.
st178953
I tried to remove all the dataloader overhead and profiling overhead and see about 98% scaling: $ OMP_NUM_THREADS=1 EPOCHS=1 EPOCH_SIZE=3840 BATCH_SIZE=64 NODES=1 GPUS=1 python /tmp/min_ddp.py 2020-04-21 14:51:31.024092 torch.cuda.set_device(0); torch.distributed.init_process_group('nccl', rank=0, world_size=1) 2020-04-21 14:51:33.167023 Epoch 1, dl: 60 2020-04-21 14:52:08.255089 training loop time: 35.08807826042175 seconds $ OMP_NUM_THREADS=1 EPOCHS=2 EPOCH_SIZE=3840 BATCH_SIZE=64 NODES=1 GPUS=2 python /tmp/min_ddp.py 2020-04-21 14:52:15.271820 torch.cuda.set_device(1); torch.distributed.init_process_group('nccl', rank=1, world_size=2) 2020-04-21 14:52:15.278892 torch.cuda.set_device(0); torch.distributed.init_process_group('nccl', rank=0, world_size=2) 2020-04-21 14:52:18.939304 Epoch 1, dl: 30 2020-04-21 14:52:18.939501 Epoch 1, dl: 30 2020-04-21 14:52:37.220102 Epoch 2, dl: 30 2020-04-21 14:52:37.220168 Epoch 2, dl: 30 2020-04-21 14:52:54.701512 training loop time: 35.76222109794617 seconds Code changes: import datetime import math import os import time import torch import torch.distributed import torch.multiprocessing from torch import nn from torch.nn import functional as F from torch.utils.data import DataLoader from torch.nn.parallel import DataParallel from torch.nn.parallel import DistributedDataParallel #from apex.parallel import DistributedDataParallel import torchvision num_nodes = int(os.environ['NODES']) num_gpus = int(os.environ['GPUS']) def main(ddp_wrapper=None, sampler_cls=None, gpu_ndx=0): epoch_size = int(os.environ['EPOCH_SIZE']) ds = torchvision.datasets.FakeData( epoch_size, num_classes=100, transform=torchvision.transforms.ToTensor(), ) dl = DataLoader( ds, batch_size=int(os.environ['BATCH_SIZE']), num_workers=4, pin_memory=True, sampler=sampler_cls(ds) if sampler_cls else None, ) model = torchvision.models.resnet50() model = model.to('cuda') if ddp_wrapper: model = ddp_wrapper(model) optimizer = torch.optim.Adam(model.parameters(), lr=0.01) ''' import cProfile, pstats, io pr = cProfile.Profile() pr.enable() ''' batch_size = int(os.environ['BATCH_SIZE']) x = torch.rand((batch_size, 3, 224, 224), device='cuda:' + str(gpu_ndx)) y = torch.randint(0, 100, size=(batch_size,), dtype=torch.long, device='cuda:' + str(gpu_ndx)) start_ts = time.time() for epoch_ndx in range(1, int(os.environ['EPOCHS']) + 1): iters = int(epoch_size/batch_size/num_gpus) print(datetime.datetime.now(), f"Epoch {epoch_ndx}, dl: {iters}") for i in range(iters): optimizer.zero_grad() #x, y = batch_tup x = x.to('cuda') y = y.to('cuda') y_hat = model(x) loss_var = F.cross_entropy(y_hat, y) loss_var.backward() optimizer.step() end_ts = time.time() #pr.disable() if gpu_ndx == 0: ''' pr.dump_stats('/tmp/min_profile.out') # pstats.Stats(pr).sort_stats('cumulative').print_stats() pstats.Stats(pr).sort_stats('tot').print_stats() ''' print(datetime.datetime.now(), f"training loop time: {end_ts - start_ts} seconds") ''' print('\n'.join( ['min ddp', 'cluster'] + [os.environ[x] for x in ['NODES', 'GPUS', 'BATCH_SIZE', 'EPOCH_SIZE', 'EPOCHS', 'OMP_NUM_THREADS']] + [f'{end_ts - start_ts}'] + [f"{int(os.environ['EPOCH_SIZE']) * int(os.environ['EPOCHS']) / (end_ts - start_ts) / int(os.environ['GPUS'])}"] + [f"{int(os.environ['EPOCH_SIZE']) * int(os.environ['EPOCHS']) / (end_ts - start_ts) / int(os.environ['GPUS']) / 1.737005}"] )) ''' def ddp_spawn(gpu_ndx): node_rank = 0 rank = num_gpus * node_rank + gpu_ndx world_size = num_nodes * num_gpus print(datetime.datetime.now(), f"torch.cuda.set_device({gpu_ndx}); 
torch.distributed.init_process_group('nccl', rank={rank}, world_size={world_size})") torch.cuda.set_device(gpu_ndx) torch.distributed.init_process_group('nccl', rank=rank, world_size=world_size) main( ddp_wrapper=lambda m: DistributedDataParallel(m, [gpu_ndx]), sampler_cls=torch.utils.data.distributed.DistributedSampler, gpu_ndx=gpu_ndx, ) if __name__ == '__main__': os.environ['MASTER_ADDR'] = 'localhost' os.environ['MASTER_PORT'] = '1234' torch.multiprocessing.spawn(ddp_spawn, nprocs=num_gpus, args=())
st178954
That’s odd; I see about 95% using the code you posted.

$ OMP_NUM_THREADS=1 NUM_WORKERS=1 EPOCHS=2 EPOCH_SIZE=3840 BATCH_SIZE=64 NODES=1 GPUS=2 ~/v/bin/python forum_min_ddp.py
2020-04-21 15:18:35.059667 torch.cuda.set_device(1); torch.distributed.init_process_group('nccl', rank=1, world_size=2)
2020-04-21 15:18:35.059667 torch.cuda.set_device(0); torch.distributed.init_process_group('nccl', rank=0, world_size=2)
2020-04-21 15:18:36.918569 Epoch 1, dl: 30
2020-04-21 15:18:36.918801 Epoch 1, dl: 30
2020-04-21 15:18:48.006334 Epoch 2, dl: 30
2020-04-21 15:18:48.007357 Epoch 2, dl: 30
2020-04-21 15:18:59.082558 training loop time: 22.16376233100891 seconds

$ OMP_NUM_THREADS=1 NUM_WORKERS=1 EPOCHS=1 EPOCH_SIZE=3840 BATCH_SIZE=64 NODES=1 GPUS=1 ~/v/bin/python forum_min_ddp.py
2020-04-21 15:19:09.327511 torch.cuda.set_device(0); torch.distributed.init_process_group('nccl', rank=0, world_size=1)
2020-04-21 15:19:11.018629 Epoch 1, dl: 60
2020-04-21 15:19:31.962914 training loop time: 20.944292783737183 seconds

Could you post your output from nvidia-smi -q?
st178955
elistevens: That’s odd; I see about 95% using the code you posted. I should’ve mentioned I was running on master and not 1.4, although I’m not sure if that matters. elistevens: Could you post your output from nvidia-smi -q ? nvidia-smi -q seems to include some sensitive information like serial numbers and UUID, was there something specific you’d like to know about my setup? Happy to share that information.
st178956
Ahh sorry, I didn’t realize there was potentially sensitive info in there. I was mostly going to visually diff it with what I have here and see if anything jumped out at me. For example, here is what I see when I’m actually running training: https://gist.github.com/elistevens/dbe5564873a1f55c4ac98594cfd31c63 This is my home system; it’s two 1080 Tis running on PCIe 3.0 8x slots (it’s an older consumer motherboard).
st178957
Hi Eli, TL;DR: I suspect the main reason for the disparity you’re observing is that each epoch, the dataloader processes are shutdown and recreated, and that is not free. I was able to reproduce your issue with your training script, although I had about 95% scaling from the start on my machine (Ubuntu 18.04.2, 28-core Intel Core i9-9940X CPU @ 3.30GHz, 2x Quadro RTX 5000, PyTorch 1.3.1, CUDA 10.0, NCCL 2.4.8) with the parameters OMP_NUM_THREADS=4 EPOCHS=2 EPOCH_SIZE=3840 BATCH_SIZE=64 NODES=1 GPUS=2 vs. OMP_NUM_THREADS=4 EPOCHS=1 EPOCH_SIZE=3840 BATCH_SIZE=64 NODES=1 GPUS=1 Looking at the GPU utilization with nvtop, I noticed a dip in GPU usage between the epochs with GPUS=2. I knew that dataloader processes are destroyed and then recreated from scratch every epoch, so GPUS=1 EPOCHS=1 version would only do it once, while GPUS=2 EPOCHS=2 would have to do it twice. So I decided to remove this unfairness: I made the script to always do just 1 epoch, and instead scale the dataset size like: ds = torchvision.datasets.FakeData( int(os.environ['EPOCH_SIZE']) * int(os.environ['EPOCHS']), This gave me 99% scaling, in fact even more when I set EPOCH_SIZE=38400 (10x). And, the invocations counts became equal between GPUS=1 and 2 for the top 20 functions from your pstats output. (Except the {method 'acquire' of '_thread.lock' objects} which has 4 extra invocations in GPUS=2 case. That one is coming from pin_memory thread, if you set pin_memory=False, all those acquires go away, but training, as expected, gets slower in both cases, although GPUS=1 case suffers more than GPUS=2). BTW there is a way to not recreate dataloader processes 10 each epoch and just loop over and over. Hope this helps and you can replicate! my pstats for GPUS=1 ncalls tottime percall cumtime percall filename:lineno(function) 600 50.489 0.084 50.489 0.084 {method 'run_backward' of 'torch._C._EngineBase' objects} 193200 37.203 0.000 37.203 0.000 {method 'mul_' of 'torch._C._TensorBase' objects} 193200 32.657 0.000 32.657 0.000 {method 'add_' of 'torch._C._TensorBase' objects} 96600 20.367 0.000 20.367 0.000 {method 'sqrt' of 'torch._C._TensorBase' objects} 96600 19.994 0.000 19.994 0.000 {method 'addcdiv_' of 'torch._C._TensorBase' objects} 600 15.513 0.026 140.393 0.234 /opt/conda/lib/python3.6/site-packages/torch/optim/adam.py:49(step) 96600 14.561 0.000 14.561 0.000 {method 'addcmul_' of 'torch._C._TensorBase' objects} 1200 3.360 0.003 3.360 0.003 {method 'to' of 'torch._C._TensorBase' objects} 31800 1.367 0.000 1.367 0.000 {built-in method conv2d} 600 1.046 0.002 1.046 0.002 {built-in method torch.distributed._broadcast_coalesced} 31800 0.919 0.000 0.919 0.000 {built-in method batch_norm} 31800 0.704 0.000 1.962 0.000 /opt/conda/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py:58(forward) 1840 0.496 0.000 0.496 0.000 {method 'acquire' of '_thread.lock' objects} 96439 0.359 0.000 0.359 0.000 {method 'zero_' of 'torch._C._TensorBase' objects} 29400 0.307 0.000 0.307 0.000 {built-in method relu_} 110400/600 0.254 0.000 5.682 0.009 /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py:531(__call__) 9600 0.223 0.000 4.182 0.000 /opt/conda/lib/python3.6/site-packages/torchvision/models/resnet.py:95(forward) 5 0.219 0.044 0.219 0.044 {built-in method _posixsubprocess.fork_exec} 353400 0.175 0.000 0.175 0.000 /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py:571(__getattr__) 53 0.116 0.002 0.116 0.002 {built-in method posix.waitpid} 31800 0.096 0.000 1.057 0.000 
/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1643(batch_norm) 600 0.087 0.000 0.485 0.001 /opt/conda/lib/python3.6/site-packages/torch/optim/optimizer.py:159(zero_grad) my pstats for GPUS=2 ncalls tottime percall cumtime percall filename:lineno(function) 600 50.391 0.084 50.391 0.084 {method 'run_backward' of 'torch._C._EngineBase' objects} 193200 37.327 0.000 37.327 0.000 {method 'mul_' of 'torch._C._TensorBase' objects} 193200 32.815 0.000 32.815 0.000 {method 'add_' of 'torch._C._TensorBase' objects} 96600 20.583 0.000 20.583 0.000 {method 'sqrt' of 'torch._C._TensorBase' objects} 96600 20.394 0.000 20.394 0.000 {method 'addcdiv_' of 'torch._C._TensorBase' objects} 600 15.658 0.026 141.634 0.236 /opt/conda/lib/python3.6/site-packages/torch/optim/adam.py:49(step) 96600 14.755 0.000 14.755 0.000 {method 'addcmul_' of 'torch._C._TensorBase' objects} 1200 3.549 0.003 3.549 0.003 {method 'to' of 'torch._C._TensorBase' objects} 31800 1.394 0.000 1.394 0.000 {built-in method conv2d} 600 1.104 0.002 1.104 0.002 {built-in method torch.distributed._broadcast_coalesced} 31800 0.924 0.000 0.924 0.000 {built-in method batch_norm} 31800 0.713 0.000 1.975 0.000 /opt/conda/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py:58(forward) 1844 0.550 0.000 0.550 0.000 {method 'acquire' of '_thread.lock' objects} 96439 0.369 0.000 0.369 0.000 {method 'zero_' of 'torch._C._TensorBase' objects} 29400 0.316 0.000 0.316 0.000 {built-in method relu_} 110400/600 0.261 0.000 5.870 0.010 /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py:531(__call__) 9600 0.242 0.000 4.253 0.000 /opt/conda/lib/python3.6/site-packages/torchvision/models/resnet.py:95(forward) 5 0.215 0.043 0.215 0.043 {built-in method _posixsubprocess.fork_exec} 353400 0.174 0.000 0.174 0.000 /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py:571(__getattr__) 53 0.152 0.003 0.152 0.003 {built-in method posix.waitpid} 600 0.118 0.000 0.118 0.000 {built-in method addmm} 31800 0.098 0.000 1.063 0.000 /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1643(batch_norm) 600 0.090 0.000 0.499 0.001 /opt/conda/lib/python3.6/site-packages/torch/optim/optimizer.py:159(zero_grad)
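For completeness, here is a small sketch of the "don't recreate dataloader workers every epoch" idea for a benchmark like this one: keep a single never-ending iterator over the DataLoader and count steps instead of epochs (note this skips DistributedSampler.set_epoch, which is fine for synthetic data but changes shuffling for real training):

def infinite_batches(loader):
    while True:
        for batch in loader:
            yield batch

batch_iter = infinite_batches(dl)
steps = len(dl) * int(os.environ['EPOCHS'])
for step in range(steps):
    x, y = next(batch_iter)
    # ... forward/backward/optimizer step as before ...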
st178958
That’s a good point; I hadn’t considered the disparity introduced by having a different number of epochs. While fixing up my testing script, I stumbled across what I think is a key culprit: thermal throttling. My home setup has the two GPUs in adjacent slots, and what I think is happening is that the airflow into the top GPU is being warmed by the backplate of the bottom GPU: if I heat the GPUs up with a job, the top one hits 90C and the pclk reported by nvidia-smi dmon drops, and it drops more with a 2-GPU job. The hint that clued me in was the 2-GPU times getting worse as I increased the epoch size, rather than better. While I had tested on work systems, those earlier tests might have suffered from issues with epoch counts, etc. I'm going to rerun those tests with my updated testing script on work systems and see what the results are. I'll report back when I have them (probably tomorrow). Thank you to everyone who took the time to read, comment, and/or run my testing script.
st178959
Short follow up: with the suggested changes, I was able to get scaling at 98.5% of linear with 2 GPUs on the work cluster. Thanks again!
st178960
Hi, I followed the instructions from the PyTorch GitHub repo as below.

Install dependencies:

conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi
(cmake gave an error, so I used: conda install -c anaconda cmake)

Clone PyTorch:

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install

The error is as below.

MakeFiles/c10.dir/util/numa.cpp.o -c ../c10/util/numa.cpp
../c10/util/numa.cpp:6:10: fatal error: numa.h: No such file or directory
 #include <numa.h>
          ^~~~~~~~
compilation terminated.
[1684/4092] Building CXX object third_p…akeFiles/dnnl_cpu.dir/cpu_reorder.cpp.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "setup.py", line 740, in build_deps()
  File "setup.py", line 323, in build_deps cmake=cmake)
  File "/cluster/home/cnphuong/pytorch/tools/build_pytorch_libs.py", line 62, in build_caffe2 cmake.build(my_env)
  File "/cluster/home/cnphuong/pytorch/tools/setup_helpers/cmake.py", line 340, in build self.run(build_args, my_env)
  File "/cluster/home/cnphuong/pytorch/tools/setup_helpers/cmake.py", line 141, in run check_call(command, cwd=self.build_dir, env=env)
  File "/cluster/home/cnphuong/.conda/envs/Pytorch_ENV/lib/python3.6/subprocess.py", line 311, in check_call raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '64']' returned non-zero exit status 1.

When running setup.py I saw some output here.
st178961
Could you please create an issue on GitHub to track this? Also, this does not seem to be relevant to torch.distributed?
st178962
Hi, because distributed CPU training is only supported by building from source, I think most people following this tag have more experience with it than others. Thanks,
st178963
Hey @ph0123, DistributedDataParallel with a CPU model should be supported by default in the release binaries. You can enable this mode by passing in a CPU model and not providing a device_ids argument. If this is all you need, you don't need to compile from source, I think. Did you hit any error when trying to run DDP with CPU models?
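For reference, a minimal sketch of CPU-only DDP with the prebuilt binaries (Gloo backend, no device_ids):

import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = DDP(nn.Linear(10, 10))          # CPU model, no device_ids
    out = model(torch.randn(4, 10))
    out.sum().backward()                    # gradients are allreduced over gloo
    dist.destroy_process_group()

if __name__ == "__main__":
    torch.multiprocessing.spawn(worker, args=(2,), nprocs=2)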
st178964
mrshenli: DistributedDataParallel with CPU model should be supported by default in the release binaries. You can enable this mode by passing in a CPU model and do not provide a device_ids argument. If this is all you need, you don't need to compile from source I think? Did you hit any error when trying to run DDP with CPU models

Dear mrshenli, thanks! Last time I installed from source and got some errors when running the program with distributed data parallel. From the tutorial I thought I had to install from source to run distributed CPU training, but now that I have read the documents again, it does not require installing from source. I will try it. Thank you so much! Thanks,
st178965
Hi, please see:

**MPI Backend**

The Message Passing Interface (MPI) is a standardized tool from the field of high-performance computing. It allows to do point-to-point and collective communications and was the main inspiration for the API of `torch.distributed`. Several implementations of MPI exist (e.g. [Open-MPI](https://www.open-mpi.org/), [MVAPICH2](http://mvapich.cse.ohio-state.edu/), [Intel MPI](https://software.intel.com/en-us/intel-mpi-library)) each optimized for different purposes. The advantage of using the MPI backend lies in MPI’s wide availability - and high-level of optimization - on large computer clusters. [Some](https://developer.nvidia.com/mvapich) [recent](https://developer.nvidia.com/ibm-spectrum-mpi) [implementations](https://www.open-mpi.org/) are also able to take advantage of CUDA IPC and GPU Direct technologies in order to avoid memory copies through the CPU.

Unfortunately, PyTorch’s binaries can not include an MPI implementation and we’ll have to recompile it by hand. Fortunately, this process is fairly simple given that upon compilation, PyTorch will look *by itself* for an available MPI implementation. The following steps install the MPI backend, by installing PyTorch [from source](https://github.com/pytorch/pytorch#from-source).

1. Create and activate your Anaconda environment, install all the pre-requisites following [the guide](https://github.com/pytorch/pytorch#from-source), but do **not** run `python setup.py install` yet.
2. Choose and install your favorite MPI implementation. Note that enabling CUDA-aware MPI might require some additional steps. In our case, we’ll stick to Open-MPI *without* GPU support: `conda install -c conda-forge openmpi`
3. Now, go to your cloned PyTorch repo and execute `python setup.py install`.

In order to test our newly installed backend, a few modifications are required.

1. Replace the content under `if __name__ == '__main__':` with `init_process(0, 0, run, backend='mpi')`.
2. Run `mpirun -n 4 python myscript.py`.

The reason for these changes is that MPI needs to create its own environment before spawning the processes. MPI will also spawn its own processes and perform the handshake described in [Initialization Methods](https://pytorch.org/tutorials/intermediate/dist_tuto.html#initialization-methods), making the `rank` and `size` arguments of `init_process_group` superfluous. This is actually quite powerful as you can pass additional arguments to `mpirun` in order to tailor computational resources for each process. (Things like number of cores per process, hand-assigning machines to specific ranks, and [some more](https://www.open-mpi.org/faq/?category=running#mpirun-hostfile))

Doing so, you should obtain the same familiar output as with the other communication backends.

Thanks,
st178966
Oh I see, you are trying to use MPI. Is MPI the only option, or will Gloo or NCCL also be acceptable? And yes, MPI backend needs building from source.
st178967
BTW, the build log shown here 1 does not seem to be complete. Could you please also paste the last few screens of logs?