st179268
@mrshenli Thanks a lot for finding the machine configs. I will see if I can run on more modern hardware. For K80s, that was the best I was able to produce. I also did some micro-benchmarks to understand the bottlenecks. In my experiments I got the optimum results close to the end of the curve (i.e., toward the +infinity direction on the graph's x-axis), but I didn't get a significant speedup from the code. That's why I was trying different approaches.
st179269
I'm currently implementing a heterogeneity-aware distributed model. My basic idea is to do all-reduce only on a subset of fast workers in the world group. However, I noticed that the docs for torch.distributed.new_group say: "This function requires that all processes in the main group (i.e. all processes that are part of the distributed job) enter this function, even if they are not going to be members of the group." Does that mean that the fast workers have to wait for the slow workers before they can move forward (i.e. the slower ones will throttle the faster ones)? If so, is there any other way to implement the model?
st179270
"Does that mean that the fast workers should wait for the slow workers before they can move forward (i.e. the slower one will sync the faster one)?"

No. It only requires all processes to call that function for rendezvous. After that, collective communications (e.g., allreduce) within a subgroup do not require non-member processes to join, so different subgroups can run allreduce independently. For the implementation, you could create multiple DDP gangs on the same model, with each gang spanning a different set of processes. But then you will need to coordinate the communication, because the different DDP gangs will read from and write to the same set of param.grad fields. The application needs to avoid that race.
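As a minimal sketch of the rendezvous requirement (the group membership and world size below are made up for illustration, not taken from the thread): every rank calls new_group for every subgroup, but each rank only passes the group it belongs to into the collective.

import torch
import torch.distributed as dist

# Assumes the default process group is already initialized with world_size == 4
# (CPU tensors here, so assume the gloo backend for the sketch).
# Every rank must call new_group() for each subgroup, even non-members.
fast_group = dist.new_group(ranks=[0, 1])   # hypothetical "fast" workers
slow_group = dist.new_group(ranks=[2, 3])   # hypothetical "slow" workers

rank = dist.get_rank()
t = torch.ones(1) * rank

# Only members pass the subgroup handle to the collective; the two
# subgroups can now allreduce independently of each other.
if rank in (0, 1):
    dist.all_reduce(t, group=fast_group)
else:
    dist.all_reduce(t, group=slow_group)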
st179271
I am trying to replicate the model parallel best practices tutorial (Model Parallel PyTorch Docs). I use Tesla K80 GPUs for running the example. I didn't plot graphs at first, but I have the following stats:

Single Node Time: 2.1659805027768018
Model Parallel Time: 2.23040875303559
Pipeline (split size 20) Mean: 3.496733816713095

I don't get the best results at this split size; that could be acceptable depending on hardware and software, so I went on to test what is going on and ran the rest of the tutorial for different split sizes. Here is the graph I obtained. Stats for the corresponding split sizes:

[11.667116858577355, 15.080700974399225, 3.556491438532248, 6.485900523653254, 2.4750063681043684, 4.956193731771782, 3.506740797869861, 1.6466765413060784, 1.5998394633177668]

I don't get a similar graph. Instead of having a local minimum in the middle, I get the minimum value for split size 60. Then I investigated a bit deeper by logging the times for forward prop, backward prop, label copy and optimization. I get something like this (MB-i refers to the i-th mini-batch, FW: forward time, BW: backward time, LBL_CP: label copy time, OPT: optimization time):

Split size 20
MB-1: FW 0.12454676628112793, LBL_CP 0.5665407180786133, BW 0.25083422660827637, OPT 0.015613555908203125
MB-2: FW 0.31687474250793457, LBL_CP 0.5684511661529541, BW 0.26471543312072754, OPT 0.017733335494995117
MB-3: FW 0.3080329895019531, LBL_CP 0.571399450302124, BW 0.2626023292541504, OPT 0.018143177032470703

Split size 1
MB-1: FW 2.2466013431549072, LBL_CP 0.003688812255859375, BW 1.7002854347229004, OPT 0.0038182735443115234
MB-2: FW 2.2562222480773926, LBL_CP 0.00019812583923339844, BW 1.6973598003387451, OPT 0.0039861202239990234
MB-3: FW 2.2152814865112305, LBL_CP 0.0023992061614990234, BW 1.6881706714630127, OPT 0.004811525344848633

Split size 3
MB-1: FW 3.195209264755249, LBL_CP 0.7909142971038818, BW 0.9772884845733643, OPT 0.0038728713989257812
MB-2: FW 3.122593402862549, LBL_CP 0.7815954685211182, BW 0.960608959197998, OPT 0.0037987232208251953
MB-3: FW 3.2085180282592773, LBL_CP 0.7906265258789062, BW 0.9696476459503174, OPT 0.003855466842651367

Split size 5
MB-1: FW 0.5092735290527344, LBL_CP 0.003528594970703125, BW 0.6527245044708252, OPT 0.005049228668212891
MB-2: FW 0.44788599014282227, LBL_CP 0.0061757564544677734, BW 0.6450486183166504, OPT 0.003782510757446289
MB-3: FW 0.514885425567627, LBL_CP 0.003251314163208008, BW 0.6562778949737549, OPT 0.004816293716430664

The label copy time fluctuates a lot, but it always copies the same size of array chunk, doesn't it? In addition, the FW time also fluctuates in an unexpected way. I am trying to profile this and see what happens, but I would like some insight into why this could be happening. (Does this have something to do with NVLink?)
st179272
I am facing a similar issue while trying to replicate the tutorial. In my case the pipelining time is higher than the model-parallel time which is not expected.
st179273
I also got the same problem; I got a similar graph to this. So I investigated further, but it is not clear why this is happening. I am currently micro-benchmarking it and still have no clear idea.
st179274
Hey @Vibhatha_Abeykoon, how did you log the time for fw, bw and opt? Do you use CUDA events and then call elapsed_time?
st179275
Hello, In my implementation I train the model with multiple processes and save its state dictionary from a concurrent process so I can evaluate/test it after the training is complete. During evaluation, I load the saved state dictionaries. I need to do this in order to compute some extra information from my distributed/concurrent training algorithm; it also keeps the processors free of the test-computation load while training. This implementation works perfectly for all the models I have worked with when run on a CPU. On a GPU, when the number of training processes is one, it is again fine for each of those models. Furthermore, it also works fine if there is no BatchNorm in the model and I train it using multiple processes on a GPU. However, with BatchNorm and multiple processes training on a GPU, when I do a forward pass later during evaluation, it hangs at Conv2d (it might be somewhere further inside, but Conv2d is as far as I could pin it down in my debugger). What exactly happens with the BatchNorm + multiprocessing combination?
st179276
Hi, I want to split my input and weight tensors into x chunks with torch.chunk, and then evaluate the chunk-wise nn.functional.linear or nn.functional.conv2d in parallel without having to use a for loop. Is this possible in PyTorch?
st179277
Hey @Sai_Kiran, are you referring to Mesh-TensorFlow-like parallelism? We don't have an API for it yet, but the split, scatter, parallel_apply, and gather can be done on the application side. With parallel_apply there won't be a loop in the application code, but it internally uses a loop to launch multiple threads that process the inputs in parallel. Here is an example usage of parallel_apply.
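As a rough illustration of that split/parallel_apply/gather flow (the module shapes, chunking, and device ids below are made up for the example and assume two visible GPUs; the gather step is done with a plain concatenation here):

import torch
import torch.nn as nn
from torch.nn.parallel import parallel_apply

# Two hypothetical linear "shards", one per GPU.
shards = [nn.Linear(16, 8).to('cuda:0'), nn.Linear(16, 8).to('cuda:1')]

# Split the batch into one chunk per shard and move each chunk to its shard's device.
chunks = torch.chunk(torch.randn(4, 16), 2, dim=0)
inputs = [chunk.to(f'cuda:{i}') for i, chunk in enumerate(chunks)]

# One thread per (module, input) pair; no explicit Python loop over shards.
outputs = parallel_apply(shards, inputs)

# "Gather": bring the partial results back to one device and concatenate.
result = torch.cat([out.to('cuda:0') for out in outputs], dim=0)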
st179278
Did you encounter any issues when installing that library? The DeepSpeed authors would know more about the process. Would you consider routing this question to the DeepSpeed repo?
st179279
I am training a model that does not make full use of the GPU's compute and memory. Training is carried out over two 2080 Ti GPUs using DistributedDataParallel. How can we concurrently train 2 models per GPU (each using different parameters), so that we can more fully utilize the GPUs? The following code currently trains only 1 model across 2 GPUs.

import torch.multiprocessing as multiprocessing
import torch.distributed as distributed
import torch.nn as nn

def train(gpu, args):
    distributed.init_process_group(
        backend='nccl',
        init_method='env://',
        world_size=args['world_size'],
        rank=args['nr'] * args['gpu'] + gpu)
    ...
    torch.cuda.set_device(gpu)
    model.cuda(gpu)
    model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
    # training loop
    for epoch in range(num_epochs):
        ...

if __name__ == '__main__':
    multiprocessing.spawn(train, nprocs=2, args=(args,))
st179280
One possibility is to use the new_group API in torch.distributed to create a different process group for each of the two models, then create separate DistributedDataParallel instances, one per model, and pass the process group object explicitly to the DistributedDataParallel constructor (the process_group arg) instead of using the default one. In this way, the allreduce operations of the two DistributedDataParallel instances will not collide.
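A minimal sketch of that setup (assuming 2 processes per model, 4 ranks and 4 GPUs total; the toy model and the rank split are made up for illustration):

import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes init_process_group() has already been called with world_size == 4.
# Every rank must call both new_group()s, even if it only joins one.
group_a = dist.new_group(ranks=[0, 1])
group_b = dist.new_group(ranks=[2, 3])

rank = dist.get_rank()
model = nn.Linear(10, 10).cuda(rank)
if rank in (0, 1):
    ddp_model = DDP(model, device_ids=[rank], process_group=group_a)
else:
    ddp_model = DDP(model, device_ids=[rank], process_group=group_b)
# Each DDP instance now allreduces gradients only within its own group.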
st179281
Hi, I'm new to distributed computation with PyTorch. I'm interested in partitioning a network so that one piece runs on machine A and the other piece runs on machine B. The first thing I need to do is send tensors from machine A to machine B, so I thought about using point-to-point communication as in Writing Distributed Applications with PyTorch. I'm trying to adapt that code to send messages between machines A and B, but I have not been successful. Can anyone explain the whole pipeline for this? Any help would be appreciated!
st179282
If you want to use model sharding, this simple example might be useful. The linked tutorial explains a distributed setup, so let me know if I misunderstood your use case.
st179283
First of all, thanks for your attention. It is not exactly what I need, but it helped me with another point, so thanks again. My issue is related to edge computing: basically I need to run just a couple of layers on a drone, and the other layers will run on a machine equipped with a GPU. So I thought I could send messages from the drone to my machine. Is this possible with PyTorch?
st179284
That's a really interesting use case, but I'm not really sure how well this would work. You could most likely connect the drone and your workstation to the same network and indeed use DDP. However, have you thought about the latency this would create? How long can you wait for the response?
st179285
Yes! You are right. Actually, I am interested in measuring this kind of problem because my research is concerned with 5G systems.
st179286
I think what you may be looking for is our Distributed RPC framework (https://pytorch.org/tutorials/intermediate/rpc_tutorial.html?highlight=rpc), which allows you to send messages and tensors between workers. Also see the Distributed Autograd framework (https://pytorch.org/docs/master/rpc.html#distributed-autograd-framework) for training models that are partitioned across machines. Lastly, here is an example of training an RNN using RPC/Distributed Autograd: https://github.com/pytorch/examples/tree/master/distributed/rpc/rnn.
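To make the message-passing part concrete, here is a rough sketch of sending a tensor from one worker to another with the RPC API. The worker names, the remote function, and the two-rank setup are placeholders, not part of the original answer; error handling and the real model partition are omitted.

import torch
import torch.distributed.rpc as rpc

def run_layers_on_gpu_machine(x):
    # Hypothetical remote function: would run the heavy layers on the GPU worker.
    # It must be importable under the same name on both workers.
    return x * 2

# On the drone / client (rank 0), assuming MASTER_ADDR and MASTER_PORT are set:
rpc.init_rpc("drone", rank=0, world_size=2)
out = rpc.rpc_sync("gpu_worker", run_layers_on_gpu_machine, args=(torch.randn(4),))
rpc.shutdown()

# On the GPU machine (rank 1), the counterpart would be:
# rpc.init_rpc("gpu_worker", rank=1, world_size=2)
# rpc.shutdown()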
st179287
Thanks @osalpekar! It helped me a lot. Actually, I want to thank you both for your attention.
st179288
I am using DistributedDataParallel for multi-GPU training with PyTorch, and each process uses one GPU. When I follow the usual pattern in PyTorch, the inference time is fine, like this:

x, y = next(train_loader)
x = x.cuda(rank)
y = y.cuda(rank)
t0 = time.time()
y1 = model(x)
torch.cuda.synchronize()
inference_time = time.time() - t0

But when I get the data from another thread, which continuously reads data from train_loader and puts it into a queue, the code looks like this:

args.data_queue = queue.Queue()

def load_data_queue(rank, dataloader, args):
    n = 0
    while True:
        try:
            x, y = next(dataloader)
            x = x.cuda(rank)
            y = y.cuda(rank)
            args.data_queue.put([x, y])
        except StopIteration:
            print('load queue quits normally')
            return

...
t = threading.Thread(target=load_data_queue, args=(rank, train_loader, args), daemon=True)
t.start()
...
x, y = args.data_queue.get()
t0 = time.time()
y1 = model(x)
# torch.cuda.synchronize()
inference_time = time.time() - t0

The inference_time increases a lot. To my understanding, GPU I/O should not influence GPU compute. What is causing this?
st179289
You might run into the GIL in Python. Wouldn’t the first DDP approach work or what are the shortcomings you are facing that need the second approach?
st179290
Thank you for your reply. We are doing some research that requires GPU I/O and compute to run in parallel, and when we were validating our idea we observed this problem.
st179291
Thanks! I changed from thread parallelism to process parallelism, and the training performance is back to normal.
st179292
Hi, is it possible or necessary to optimize the dynamic computation graph generated during training for higher throughput? If it is, what is the recommended way to achieve that? Thanks in advance.
st179293
Hi, this is not necessary in general. If you really want to squeeze out the best performance, you can use a TorchScript model with C++ inference to strip away the Python interpreter.
st179294
Thank you for the reply. But my use case is to improve the training throughput. If I understand it right, TorchScript can only improve performance for network inference rather than training (forward & backward). Do you have any advice on how to improve PyTorch forward & backward efficiency?
st179295
You can actually perform training with TorchScript: you can script your Python code and train the scripted module. That being said, if your network is a regular architecture, we try to make sure that the performance is as good as possible out of the box.
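A minimal sketch of training a scripted module (the tiny model, random data, and optimizer settings are arbitrary, just to show that backward() works through torch.jit.script):

import torch
import torch.nn as nn

# Script a small model; the forward pass then runs through the TorchScript graph.
model = torch.jit.script(nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1)))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 8), torch.randn(32, 1)
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # autograd still works on scripted modules
    opt.step()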
st179296
albanD: torchscript
Does TorchScript actually optimize the computation graph during training?
st179297
Yes, TorchScript does optimize the graph at train time. See: https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/#writing-custom-rnns.
st179298
Hi, I am trying to locate the ReduceOp definition referred to in the following snippet from the source (https://github.com/pytorch/pytorch/blob/243af17d65fb147901bf9efef21b0cb3e4f25fee/torch/distributed/distributed_c10d.py#L92):

# TODO: remove them when users are ready to take a hard dependency on PyTorch 1.
_backend = Backend.UNDEFINED
dist_backend = Backend

class reduce_op(object):
    r"""
    Deprecated enum-like class for reduction operations: ``SUM``, ``PRODUCT``,
    ``MIN``, and ``MAX``.
    :class:`~torch.distributed.ReduceOp` is recommended to use instead.
    """

    def __init__(self):
        # __members__ is a dict storing key-value pairs for enum classes
        for k, v in ReduceOp.__members__.items():
            setattr(self, k, v)
        self.__members__ = ReduceOp.__members__

    def __getattribute__(self, key):
        warnings.warn("torch.distributed.reduce_op is deprecated, please use "

(I searched both the git source and my local installation.) I couldn't find the class definition of ReduceOp. Am I missing something here?
st179299
Solved by osalpekar in post #2
st179300
ReduceOp is a C++ enum, and is exposed to the Python interface using pybind (https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/init.cpp#L145). That enum is defined here: https://github.com/pytorch/pytorch/blob/master/torch/lib/c10d/Types.hpp#L8
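Even though the enum lives in C++, it is used directly from Python; a small usage sketch (assumes a process group is already initialized, with a CPU/gloo backend for this example):

import torch
import torch.distributed as dist

# ReduceOp values such as SUM, PRODUCT, MIN, MAX are bound from the C++ enum.
t = torch.ones(2) * (dist.get_rank() + 1)
dist.all_reduce(t, op=dist.ReduceOp.SUM)   # t now holds the element-wise sum across ranks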
st179301
In the context of training using the Python front end: where could I find some information about the total number of processes and threads when using the distributed data parallel module? If I have a simple neural network (e.g. MNIST) and I do distributed data parallelism where I assign 1 process per GPU, with both training and eval going on and a dataloader with 1 worker, should I have only 3 processes per GPU: 1 main process (the training one) that spawns an eval process and a dataloader process (total of 3 processes)? Then within the main process: a thread for scheduling work, a thread for forward, a thread for backward, a thread to deal with the eval process, a thread to deal with the dataloader, and a thread for the cache manager; that is 6 threads. When profiling I see several more. Is there any document where I can get that info? Also, if BWD is consuming what FWD is producing, is there a way I can "merge" the FWD and BWD threads into a single thread? Is there also a way to not deallocate objects from the caching allocator if the number of objects (tensors, model) remains the same from iteration to iteration, so I can avoid the expensive mmap/munmap? Thanks in advance.
st179302
In terms of the total number of processes, the num_workers argument you pass to the DataLoader class determines the number of subprocesses it uses (0 means the dataloader uses the main process). Here is some documentation: https://pytorch.org/docs/1.1.0/_modules/torch/utils/data/dataloader.html. For the number of threads, this varies based on the communication backend you use (which is passed to init_process_group). For example, the gloo backend uses 2 threads per device: https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/init.cpp#L565.
st179303
Thanks for the quick reply. I am using the NCCL backend, and I do not see how many threads are being used to create the group in https://github.com/pytorch/pytorch/blob/master/torch/lib/c10d/ProcessGroupNCCL.cpp. Is there a knob to control that? I am using the dataloader with num_workers set to 1 (so the main process spawns a separate process?). Based on the information in https://pytorch.org/docs/1.1.0/_modules/torch/utils/data/dataloader.html, at each iteration the dataloader process is created and destroyed (if num_workers != 0), which has some overhead? Can we keep the processes alive across iterations (depending on how many samples within the batch you want to work on concurrently) so we do not incur that overhead? I am basically trying to prune the number of processes and threads; I understand I may restrict generality, but I am trying to speed up execution when I am CPU bound. Thanks in advance.
st179304
joshua_mora: "I am using the NCCL backend. I do not see how many threads are being passed to create the group. Is there a knob to control that?"

The number of threads is currently not tunable by the user, but we're considering making this possible in a future release.

joshua_mora: "I am using the dataloader with num_workers set to 1 (so the main process spawns a separate process?). At each iteration the dataloader process is created and destroyed (if num_workers != 0), which has some overhead? Can we keep the processes across iterations so we do not incur that overhead?"

Right, num_workers=1 spawns a separate process. Here's an issue tracking the discussion around keeping subprocesses alive across iterations (with a patch that should make this possible): "DataLoader with option to re-use worker processes" on github.com/pytorch/pytorch.
st179305
Thanks for the pointer to that discussion on the dataloader. With respect to the number of threads/processes, I still don't fully understand all the other threads being generated. Is it possible, for example, to use the same thread for backward and forward if they deal with the same model and batch, instead of having 2 threads that could be working on different models and samples? The NCCL process group also accepts a size option, and I am not sure whether it also refers to the number of threads. Are there any other hardcoded thread counts that I could reduce? (set_num_threads(1)/set_num_interop_threads(1) does not prevent the creation of a bunch of threads, more than 6 per process for each GPU.) I have very few cores available per GPU (~4), so I need to restrict the number of threads to what is necessary. Thanks again.
st179306
joshua_mora: "Is it possible for example to use the same thread for backward and forward if they deal with the same model and batch, instead of having 2 threads that could be working on different models and samples?"

I'm not sure of any way to coerce forward and backward into using the same thread.

joshua_mora: "The NCCL process group also accepts a size option, which I am not sure refers to the number of threads."

That size actually refers to world_size, which is the total number of ranks in your job.

joshua_mora: "Are there any other hardcoded thread counts that I could reduce? (set_num_threads(1)/set_num_interop_threads(1) does not prevent the creation of a bunch of threads, more than 6 per process for each GPU.) I have very few cores available per GPU (~4), so I need to restrict the number of threads to what is necessary."

This might provide some more insight into tuning the number of threads: https://github.com/pytorch/pytorch/issues/16894. For example, the OMP_NUM_THREADS env var controls the number of OpenMP threads for CPU operations, and MKL_NUM_THREADS does the same for MKL.
st179307
Thanks @osalpekar. The OpenMP environment variables don't make a difference if torch.set_num_threads is already set. In fact, I use GOMP_CPU_AFFINITY to pin a particular set of OpenMP threads to specific cores, and I also played with OMP/MKL_DYNAMIC set to false. I still do not understand, though, what torch.set_num_threads controls if I end up having 1 thread for FWD, 1 different thread for BWD, and several other threads. I suspect there is a total amount of work and some "empirical" heuristic for how many threads to use for a certain amount of work, and the env var overrides that heuristic; I am just trying to find the code where that is prescribed/defined. Are you aware of a document that describes the architecture in terms of threads/functionality? Why would I get more than 6 threads if I have torch.set_num_threads set to 1? Regards.
st179308
(Two screenshots of the run output were attached.)

So in the two images there are two different models, model and model_p, both wrapped in nn.DataParallel. But for model, when calling the fit attribute through model.module, I'm unable to utilize the two GPUs I originally wanted to parallelize the model over, i.e. model doesn't split the dim=0 (batch_first) dimension into two equal halves to put onto the two devices, as can be seen from the print statements.

P.S. I am very new to using DataParallel and wanted to use something like this: what I actually want is to call model.module.fit in my training loop with the inputs from my dataloader as args, and this fit attribute ultimately makes a call to the forward method of the model class. But this whole thing doesn't parallelize and utilize the two GPUs, which model_p can do without any fit function, with a direct call to forward internally. I've added the link to the notebook, which was run with CUDA_VISIBLE_DEVICES=0,1. What should I change? Thanks!

class Model(nn.Module):
    # Our model
    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        return output

    def fit(self, input):
        output = self.forward(input)
        print("\tIn Model: input size", input.size(), "output size", output.size())
        return output

model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)

for data in rand_loader:
    input = data.to(device)
    output = model.module.fit(input)
    print("Outside: input size", input.size(), "output_size", output.size())

############################# CASE 2 ############################

class ModelParallel(nn.Module):
    # Our model
    def __init__(self, input_size, output_size):
        super(ModelParallel, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(), "output size", output.size())
        return output

model_p = ModelParallel(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model_p = nn.DataParallel(model_p)
    # model.module.fit = nn.DataParallel(model.module.fit)
model_p.to(device)

for data in rand_loader:
    input = data.to(device)
    output = model_p(input)
    print("Outside: input size", input.size(), "output_size", output.size())
st179309
DataParallel splits the input across GPUs in its own forward function and is implemented as a wrapper rather than a subclass that overrides the model's forward. When you call fit, you are calling the forward() associated with your model, not the one wrapped by DataParallel. Hence it will only use a single GPU, as the scatter/gather in DataParallel.forward(...) is never called. From the docs:

def forward(self, *inputs, **kwargs):
    if not self.device_ids:
        return self.module(*inputs, **kwargs)

    for t in chain(self.module.parameters(), self.module.buffers()):
        if t.device != self.src_device_obj:
            raise RuntimeError("module must have its parameters and buffers "
                               "on device {} (device_ids[0]) but found one of "
                               "them on device: {}".format(self.src_device_obj, t.device))

    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
    if len(self.device_ids) == 1:
        return self.module(*inputs[0], **kwargs[0])
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
    outputs = self.parallel_apply(replicas, inputs, kwargs)
    return self.gather(outputs, self.output_device)
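One simple way around this (a sketch, not the poster's final solution) is to keep all the logic reachable from forward and select the behavior with a keyword argument, so that every call path goes through DataParallel's scatter/gather. The tiny model below and the "mode" flag are made up for illustration; it assumes at least one CUDA device is available.

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input, mode="predict"):
        out = self.fc(input)
        if mode == "fit":
            # any extra "fit" logic lives here, still inside forward()
            print("In Model:", input.size(), "->", out.size())
        return out

model = nn.DataParallel(Model(5, 2)).to("cuda")
x = torch.randn(30, 5).to("cuda")
out = model(x, mode="fit")   # goes through DataParallel.forward, so the batch is scattered across GPUs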
st179310
Thanks for the answer @jerinphilip! That makes sense as to why it only uses one GPU in the case of fit. So is there anything I can do to parallelize the fit method? Or is the only way to parallelize the model to call forward on the DataParallel-wrapped model itself?
st179311
You can use a flag keyword argument inside forward, noticing that the two functions don't differ by much. I tried to switch member functions using a flag, and the following segment worked for me (https://github.com/jerinphilip/MaskGAN.pytorch/blob/cdb2a7aa87826464f79273976286e90fbd3845dc/mgan/modules/distributed_model.py#L58-L70):

def forward(self, *args, **kwargs):
    if 'ppl' not in kwargs:
        kwargs['ppl'] = False

    if kwargs['tag'] == 'g-step':
        if self.pretrain:
            return self._gstep_pretrain(*args, ppl_compute=kwargs['ppl'])
        else:
            return self._gstep(*args, ppl_compute=kwargs['ppl'])

    elif kwargs['tag'] == 'c-step':
        return self._cstep(*args)

    return self._dstep(*args, real=kwargs['real'])

In my case I'm switching between the generator, discriminator, and critic in a GAN/actor-critic setup; I'm using tag here to control which sub-model's forward is being called. You can look at scatter's source code below to understand how args and kwargs are replicated across workers, in case there's any confusion (which I had at the time): https://github.com/pytorch/pytorch/blob/4e3c97a0be5c1bba04928de6abbdad31169e62ee/torch/nn/parallel/scatter_gather.py#L5-L31

def scatter(inputs, target_gpus, dim=0):
    r"""
    Slices tensors into approximately equal chunks and
    distributes them across given GPUs. Duplicates
    references to objects that are not tensors.
    """
    def scatter_map(obj):
        if isinstance(obj, torch.Tensor):
            return Scatter.apply(target_gpus, None, dim, obj)
        if isinstance(obj, tuple) and len(obj) > 0:
            return list(zip(*map(scatter_map, obj)))
        if isinstance(obj, list) and len(obj) > 0:
            return list(map(list, zip(*map(scatter_map, obj))))
        if isinstance(obj, dict) and len(obj) > 0:
            return list(map(type(obj), zip(*map(scatter_map, obj.items()))))
        return [obj for targets in target_gpus]

    # After scatter_map is called, a scatter_map cell will exist. This cell
    # has a reference to the actual function scatter_map, which has references
    # to a closure that has a reference to the scatter_map cell (because the ...
st179312
Thanks a tonne for helping me out! I found a simple way to change my code and use the parallel functionality in my forward. Everything working as expected now.
st179313
Hi, I met the same problem and want to use multiple GPUs even with model.module.predict, where predict is a method defined in the model class. So, could you tell me the simple way you found, @gollum? Thanks!
st179314
I'm trying to have different neural networks run in parallel on different CPUs but am finding that it isn't leading to any sort of speedup compared to running them sequentially. Below is my code that replicates the issue exactly. If you run this code, it shows that with 2 processes it takes roughly twice as long as running it with 1 process, but really it should take the same amount of time.

import time
import torch.multiprocessing as mp
import gym
import numpy as np
import copy
import torch.nn as nn
import torch

class NN(nn.Module):

    def __init__(self, output_dim):
        nn.Module.__init__(self)
        self.fc1 = nn.Linear(4, 50)
        self.fc2 = nn.Linear(50, 500)
        self.fc3 = nn.Linear(500, 5000)
        self.fc4 = nn.Linear(5000, output_dim)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.relu(self.fc3(x))
        x = self.fc4(x)
        return x

def Worker(ix):
    print("Starting training for worker ", ix)
    env = gym.make('CartPole-v0')
    model = NN(2)
    for _ in range(2000):
        model(torch.Tensor(env.reset()))
    print("Finishing training for worker ", ix)

def overall_process(num_workers):
    workers = []
    for ix in range(num_workers):
        worker = mp.Process(target=Worker, args=(ix, ))
        workers.append(worker)
    [w.start() for w in workers]
    for worker in workers:
        worker.join()
    print("Finished Training")
    print(" ")

start = time.time()
overall_process(1)
print("Time taken: ", time.time() - start)
print(" ")

start = time.time()
overall_process(2)
print("Time taken: ", time.time() - start)

Does anyone know why this might be happening and how to fix it? I thought it might be because PyTorch networks automatically implement CPU parallelism in the background, so I tried adding the two lines below, but it doesn't always resolve the issue:

torch.set_num_threads(1)
torch.set_num_interop_threads(1)
st179315
Hey @VitalyFedyunin @albanD, do you know what could cause the seemingly sequential execution? I tried the script on my laptop and saw the same behavior. When I increase the number of iterations from 2000 to 20000, it becomes even worse.

1 process: Time taken: 72.73709082603455
2 processes: Time taken: 229.3490858078003

Below is the CPU utilization graph (screenshot attached). It looks like one process occupies half of the cores, and 2 processes use all of them, but the 2-process execution time is still worse than sequential.
st179316
Hi, where do you set torch.set_num_threads(1)? You have to set it at the beginning of the Worker() function for it to have an effect on the newly created process. Checking with @mrshenli, this seems to give the right behavior after setting it. In particular, by default PyTorch will use all the available cores to run computations on the CPU, so if you launch two processes doing this at once, they will fight for the CPU and most likely slow each other down.
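Concretely, the fix amounts to moving the thread-count call into the worker; a sketch based on the script above (it reuses the NN class and gym environment from that script):

def Worker(ix):
    # Must be called inside the child process, before any heavy computation,
    # so that this process only uses a single intra-op thread.
    torch.set_num_threads(1)
    print("Starting training for worker ", ix)
    env = gym.make('CartPole-v0')
    model = NN(2)
    for _ in range(2000):
        model(torch.Tensor(env.reset()))
    print("Finishing training for worker ", ix)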
st179317
Thanks a lot. I was setting the number of threads in the parent process rather than in the worker processes. It seems to resolve the issue if I set them in each of the worker processes.
st179318
I have a large (93 GB) .h5 file containing image features on my local system, and my model is trained on the SLURM ADA cluster, which has a storage limit of 25 GB. I am trying to use the torch.distributed.rpc framework to request image features in Dataset.__getitem__ via a remote call to an RPC server on my local system.

Code for initializing the RPC server (local system):

import os
import torch.distributed.rpc as rpc

def run_worker(rank, world_size):
    os.environ['MASTER_ADDR'] = 'XX.X.XX.XX'
    os.environ['MASTER_PORT'] = 'XXXX'
    rpc.init_rpc(utils.SERVER_NAME, rank=rank, world_size=world_size)
    print("Server Initialized", flush=True)
    rpc.shutdown()

if __name__ == "__main__":
    world_size = 2
    rank = 0
    run_worker(rank, world_size)

Code for the RPC client that requests data from my local system (on ADA):

import os
import torch.distributed.rpc as rpc

def run_worker(rank, world_size):
    os.environ['MASTER_ADDR'] = 'XX.X.XX.XX'
    os.environ['MASTER_PORT'] = 'XXXX'
    rpc.init_rpc(utils.CLIENT_NAME.format(rank), rank=rank, world_size=world_size)
    print("Client Initialized", flush=True)
    main()
    rpc.shutdown()

if __name__ == '__main__':
    world_size = 2
    rank = 1
    run_worker(rank, world_size)

In my data loader I have specified num_workers=8. Simplified code for Dataset.__getitem__ is (on ADA):

def __getitem__(self, index):
    ....
    ....
    print("fetching image for image_id {}, item {}".format(image_id, item), flush=True)
    v = utils._remote_method(utils.Server._load_image, self.server_ref, [index, self.coco_id_to_index])
    return v, ......

Now, in my training loop, when I call enumerate(data_loader), multi-process data loading is enabled, __getitem__ is called num_workers times, and a deadlock is reached. I am not sure why this deadlock occurs, because whenever __getitem__ is called, a remote call should be made to the RPC server on my local system to request the data. How can I resolve the deadlock? Is there any other way to solve my problem, given that the large file doesn't fit on my ADA system? I don't want to compromise on latency. Edit: When I set num_workers=0 my code works, but it is very slow (20 sec/iteration).
st179319
Hey @Kanishk_Jain, thanks for trying out RPC.

"multi-process data loading is enabled, __getitem__ is called num_workers times, and a deadlock is reached"

It could be because it depleted the RPC threads in the thread pool; num_send_recv_threads is 4 by default. Does it work if you bump up the number of threads? Something like:

import torch
from torch.distributed.rpc import ProcessGroupRpcBackendOptions
from datetime import timedelta
import os
import torch.distributed.rpc as rpc

os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'

options = ProcessGroupRpcBackendOptions()
options.rpc_timeout = timedelta(seconds=60)
options.init_method = "env://"
options.num_send_recv_threads = 32

rpc.init_rpc("client", rank=0, world_size=2, rpc_backend_options=options)
rpc.shutdown()

The current rpc_backend_options API is too verbose and not well documented. We will improve that in the next release.
st179320
Regarding the concern about speed, hopefully using more workers in the data loader will help to boost the throughput. We are also working on making the RPC comm layer more efficient by adding TensorPipe as a new backend, so that RPC does not have to do two round trips for each message as with ProcessGroup; TensorPipe would also allow using multiple communication media.
st179321
I am using the distributed training package to train on multiple GPUs. Training works fine, but I would like to be able to evaluate during training, either on one GPU or on multiple GPUs. If I directly call the evaluate function during training, each model replica produces different results. How can I get evaluation results every certain number of steps while using the distributed package for training?
st179322
If you are using DDP, the model replica should be initialized in the same manner. Since DDP performs an all-reduce step on gradients and assumes that they will be modified by the optimizer in all processes in the same way, the model output should be the same. Are you also observing different outputs during training?
st179323
When I evaluate during training, it runs on all GPUs and each one produces different results. I am actually running this script: https://github.com/huggingface/transformers/blob/master/examples/run_glue.py ("Finetuning the library models for sequence classification on GLUE"). On line 248 it is mentioned: "Only evaluate when single GPU otherwise metrics may not average well". I don't understand why, and how to change it to be able to evaluate correctly.
st179324
Hi, I wrote a custom backward function for my model and I want to use the DataParallel package. However, I have a problem: if I use model = torch.nn.DataParallel(model, device_ids=[0, 1]) I get the following error: "'DataParallel' object has no attribute 'backward'". I know this can be solved by using model.module.backward, but then it will only use one GPU. Is there a way to use torch.nn.DataParallel with a custom backward and other attributes?
st179325
Would it be possible to return the outputs in your forward method and calculate the loss on the default device? This would be the vanilla use case, while it seems you’ve implemented backward as a class function?
st179326
Thanks for the reply. No, backward is not a separate class; it is a method inside the model class. Here is how I define it:

class myModel():
    def __init__(self, config):
        ....
    def forward(...):
        ....
    def backward(...):
        ....

And I call it this way:

outputs = model(....)
loss = outputs[0]
if args.n_gpu > 1:
    loss = loss.mean()
model = model.backward(...)

But nn.DataParallel does not recognize backward and some other attributes without going through .module.
st179327
Thanks for the information. What’s the design decision to put the backward call inside your model? Are you using some internal parameters? If so, how are these parameters updated/used inside the model?
st179328
I needed to access the activations and activation gradients in backward. I collect the activations in the forward pass and access them in backward. I use the autograd backward function to calculate each layer's backward pass and make the changes that I want in the process.
st179329
I tried the distributed data parallel instead of data parallel and it is working.
st179330
The following code always allocates memory on device cuda:0, but I want to use another GPU for training:

import torch

a = torch.randn([20000, 20000])
a.pin_memory()  # <- this operation allocates memory on "cuda:0", but I want to leave that GPU unused
b = a.cuda('cuda:1')
st179331
This most likely happens because, to get pinned memory, we need a CUDA context, so we initialize a CUDA context on the current device when it is needed, which is cuda:0 here by default. Changing the current device will help. Also, if you never want to touch cuda:0, a good practice is to use the CUDA_VISIBLE_DEVICES=1 environment variable. This acts on the NVIDIA driver for the current process and hides the other GPUs, so you can be sure you never use them.
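A quick sketch of the "change the current device first" suggestion, adapted from the snippet above (torch.cuda.set_device is the assumed mechanism here; the other option is launching the whole script with CUDA_VISIBLE_DEVICES=1):

import torch

torch.cuda.set_device('cuda:1')   # make cuda:1 the current device before pinning
a = torch.randn([20000, 20000])
a = a.pin_memory()                # the CUDA context is now created on cuda:1, not cuda:0
b = a.cuda('cuda:1')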
st179332
Hi, I have the following problem. I am trying to propagate multiple scalar outputs out of my network, for example the latency or memory consumption of the respective layers, in addition to the output itself. I would then like to add these outputs to the main loss, say cross-entropy. With a single GPU, I use a @dataclass as an accumulator for the respective scalar layer outputs, and I add its contents to the main loss. However, I have multiple GPUs that I could utilise for training, and I am not sure how to propagate the respective scalars and combine them such that I can call .backward(). Any help is much appreciated. Thanks.
st179333
Solved by ptrblck in post #2
st179334
If you are using nn.DataParallel the model will be replicated to each GPU and each model will get a chunk of your input batch. The output will be gathered on the default device, so most likely you wouldn’t have to change anything. However, I’m not sure about the use case. How are you calculating the memory consumption and is this operation differentiable? I assume it’s not differentiable so that your accumulated loss will in fact just be the nn.CrossEntropyLoss.
st179335
Thank you for getting back to me. I forgot to mention that the scalars are multiplied by a parameter that I would like to learn (I am experimenting with neural architecture search). When I did some small-scale experiments, I did not observe any errors so it has to be my implementation that is wrong, nevertheless, thank you for your clarification.
st179336
I have been working on implementing distributed training for NER. In the process I implemented a version using Horovod and one using DistributedDataParallel, because I initially thought my issues were related to my implementation. Both work as expected with a public dataset: I can scale the learning rate by the number of processes (or the batch size) and I get results that are very close to non-distributed training, yet faster. With my private dataset, which served me for testing along the way, the behavior is different: distributed training on e.g. 4 processes performs almost exactly like training a single process on 1/4 of the data with the scaled learning rate. Debugging showed that the different processes have different losses and that the gradients are correctly synchronized in the backward pass. The only two explanations I have for this: 1) there is still something wrong in my code, or 2) the gradients computed are so similar for each process that there is not much or no gain in averaging them, and the result is similar to working with 1/4 of the data and a scaled learning rate. This is my first experience with distributed training, so I can't tell if 2) is reasonable, and I'd be keen to know more about your experience with this.
st179337
I use the torch.distributed.launch module to run my training program with multiple processes. Everything seems fine, but I don't know why some of the processes assigned to GPUs 1-N also allocate memory on GPU 0. As depicted in the picture, the processes on GPU 4 and GPU 6 have something on GPU 0; these two allocations are about 700+ MB each. Sometimes other processes also show similar behavior, but not all of them allocate memory on GPU 0. I don't know why this happens. Because of the memory imbalance, the training sometimes gets killed due to an 'out of memory' error.
st179338
I agree this can be annoying. As you see, not all processes initialize this context. Is there perhaps some path in your code that conditionally initializes some memory, or sets the CUDA device manually? We don't have any facilities that I know of to point to the culprit here, aside from simply looking at the code.
st179339
Maybe it's because you don't set the device to load onto when using DistributedDataParallel:

loc = 'cuda:{}'.format(args.gpu)
checkpoint = torch.load(SAVE_PATH, map_location=loc)

Adding the map_location option in your main_worker should solve your problem.
st179340
Trying to train using DDP on 4 GPUs, but I'm getting: "process 3 terminated with signal SIGTERM", which happens most of the way through validation for some reason. Does anyone have any idea why this might happen or how I can debug it more easily?

File "train_gpu.py", line 210, in
    main_local(hparam_trial)
File "train_gpu.py", line 103, in main_local
    trainer.fit(model)
File "/scratch/staff/brm512/anaconda3/envs/ln1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 343, in fit
    mp.spawn(self.ddp_train, nprocs=self.num_gpus, args=(model,))
File "/scratch/staff/brm512/anaconda3/envs/ln1/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
    while not spawn_context.join():
File "/scratch/staff/brm512/anaconda3/envs/ln1/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 107, in join
    (error_index, name)
Exception: process 3 terminated with signal SIGTERM
st179341
Is the validation loop running correctly on a single device? Usually the error messages might be better when disabling multi-GPU runs and multiprocessing.
st179342
Hi all! I have 4 GTX 1080 Ti GPUs, and when I train an inception_v3 net on multiple GPUs the model behaves strangely. I didn't rewrite my code much from the single-GPU version, I just added:

model = nn.DataParallel(model, device_ids=[0,1,2,3]).cuda()

When I run the script with device_ids=[0,1], the GPUs are fully utilized and training is much faster. When I run the script with device_ids=[0,1,2] or device_ids=[0,1,2,3], the script starts (the GPUs show full utilization in nvidia-smi, but the reserved memory per card is small: 1 GB on the first card and 500 MB on the others) and the model doesn't train. Where am I wrong? Also, my processor has only 2 cores. Thanks for any response, and sorry for my English.
st179343
Solved by ptrblck in post #13
st179344
Does your code just hang when using all 4 GPUs, or does "model doesn't train" mean that the training loss is worse than on a single device?
st179345
With 1 or 2 GPUs the model trains, the loss decreases, all good. With 3 or 4 GPUs the script runs, but training doesn't work; I log the loss and accuracy every epoch, and when I ran the script overnight, nothing was logged.
st179346
Could you try to run the code only on device 2 and 3, if 0 and 1 are working? Set the device via .to('cuda:2') or .to('cuda:3').
st179347
If I set torch.device("cuda:2") or torch.device("cuda:3"), I get the error: tensors must be on the same device. If I set nn.DataParallel(model, device_ids=[1,2,3]).cuda(), the free memory on the first GPU (index 0) decreases (the same as if I ran training on it), and after that it raises the error: tensors must be on the same device. In the training block of code the batch of images is sent to the GPU (input.to(device)). Maybe this happens because the processor has only 2 cores?
st179348
Could you use device ids 0 and 1 in your script for nn.DataParallel and launch the script via: CUDA_VISIBLE_DEVICES=2,3 python script.py args
st179349
Running the script with these parameters launches it, but the used GPU memory on devices 2 and 3 is only 1 GB and 500 MB, and the model doesn't train.
st179350
Thanks for the test. Could you run the code on a single device now and check if it's working on GPU2 and GPU3?
st179351
So you only see the hang when you are using nn.DataParallel with devices 2 and 3? Could you run the p2pBandwidthLatencyTest from the CUDA samples?
st179352
p2pBandwidthLatencyTest ran for 24 hours and didn't finish. Here is the output so far:

P2P Connectivity Matrix
   D\D    0    1    2    3
     0    1    1    1    1
     1    1    1    1    1
     2    1    1    1    1
     3    1    1    1    1

Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D      0      1      2      3
     0 352.55   0.41   0.41   0.41
     1   0.39 216.53   0.39   0.39
     2   0.39   0.39 349.40   0.39
     3   0.39   0.39   0.39 350.65

Unidirectional P2P=Enabled Bandwidth Matrix (GB/s)
   D\D      0      1      2      3
     0 353.51   0.41   0.41   0.41
     1   0.35 376.32   0.00   0.00
     2   0.21   0.00 376.32   0.00
     3   0.21   0.00   0.00 377.78

Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D      0      1      2      3
     0 374.52   0.68   0.68   0.69
     1   0.69 375.24   0.67   0.67
     2   0.69   0.68 374.52   0.68
     3   0.69   0.68   0.68 374.70

Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
   D\D      0      1      2      3
     0 377.23   0.68   0.68   0.68
     1   0.68 372.56   0.55   0.55
     2   0.67   0.55 370.96   0.55
     3   0.67   0.55   0.55 370.26

P2P=Disabled Latency Matrix (us)
st179353
Thanks for the information. This points towards some communication issues between the GPUs. Could you run the PyTorch code using NCCL_P2P_DISABLE=1 to use shared memory instead of p2p access?
st179354
I've trained the WaveGlow model from here with multiple GPUs, but when I try to load a checkpoint to do inference (through inference.py), some checkpoints load without any problem, but most of them raise the error below:

Traceback (most recent call last):
  File "inference.py", line 105, in <module>
    args.sampling_rate, args.is_fp16, args.denoiser_strength)
  File "inference.py", line 46, in main
    model_state_dict = torch.load(waveglow_path, map_location="cuda:0")['model'].state_dict()
  File "/home/anaconda3/envs/dl/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/home/anaconda3/envs/dl/lib/python3.6/site-packages/torch/serialization.py", line 581, in _load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: storage has wrong size: expected 3901634075968565895 got 512

I changed the map_location to "cpu" and "cuda" and also tried to load the checkpoint with the same number of GPUs used during training, but I still get the same error. When I train the model with a single GPU, all checkpoints load without any issue. This happens only after I run distributed training.
st179355
Solved by ptrblck in post #2
st179356
This usually happens when multiple processes try to write to a single file. However, this should be prevented with the if condition if rank == 0:. Did you remove it or changed the save logic somehow?
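For reference, the usual guard looks something like the sketch below (the function name, path, and checkpoint contents are placeholders; the WaveGlow training script has its own equivalent):

import torch

def save_checkpoint(model, epoch, rank, path='checkpoint.pt'):
    # Only rank 0 writes, so multiple processes never write to the same file.
    if rank == 0:
        torch.save({'model': model.state_dict(), 'epoch': epoch}, path)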
st179357
Yes, exactly! It was a simple mistake on my part. I had commented out the original save_checkpoint section and added a save_checkpoint call after the epoch loop without checking if rank == 0. Now it works without any errors. Thanks a lot for your help!
st179358
I was wondering, in such a case are the checkpoints still salvageable or are they simply damaged?
st179359
If multiple processes have written to the same file, it's most likely damaged and you won't be able to restore it.
st179360
My aim is to get a linear layer with a large output dimension. To achieve this I store the weights of the linear layer in an embedding layer. Further, I need to run forward and backward only on some connections of the fully connected layer (hence the "shortlist"). Since the output size is large, I divide the embedding layer across 2 GPUs. Relevant parts of the code:

class SparseLinear(nn.Module):
    def __init__(self, num_labels, hidden_size, device_embeddings, bias=True):
        super(SparseLinear, self).__init__()
        self.device_embeddings = device_embeddings
        self.input_size = hidden_size
        self.output_size = num_labels
        self.weight = Parameter(torch.Tensor(self.output_size, self.input_size))
        if bias:
            self.bias = Parameter(torch.Tensor(self.output_size, 1))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()
        self.sparse = True  # Required for optimizer

    def forward(self, embed, shortlist):
        short_weights = F.embedding(shortlist, self.weight, sparse=self.sparse)
        out = torch.matmul(embed.unsqueeze(1), short_weights.permute(0, 2, 1))
        short_bias = F.embedding(shortlist, self.bias, sparse=self.sparse)
        out = out + short_bias.permute(0, 2, 1)
        del short_weights
        return out.squeeze()

class DividedLinear(DeepXMLBase):
    def __init__(self, <params>):
        # Say I have an output size of 1000000, and I divide it into 2 parts
        self.label_partition_lengths = [(500000, "cuda:0"), (500000, "cuda:1")]
        self.classifier = [SparseLinear(num_labels, 300, torch.device(device_name))
                           for num_labels, device_name in self.label_partition_lengths]
        <init other params>

    def encode(self, batch_data):
        # self.transform is some network to transform embeddings
        return self.transform(batch_data["doc_embeddings"].to(self.device_embeddings))

    def forward_with_error_calc(self, batch_data, criterion):
        print("before",
              torch.cuda.memory_allocated(1) / (1024 * 1024 * 1024),
              torch.cuda.memory_allocated(2) / (1024 * 1024 * 1024))
        encoded = self.encode(batch_data)
        device_embeddings = [torch.device(num_labels_device[1])
                             for num_labels_device in self.label_partition_lengths]
        shortlists = [x.to(device_embeddings[i])
                      for i, x in enumerate(batch_data["shortlist"])]
        encoded_replicate = [encoded.to(device_embeddings[i])
                             for i in range(len(device_embeddings))]
        outputs = nn.parallel.parallel_apply(self.classifier,
                                             list(zip(encoded_replicate, shortlists)))
        targets = [batch_data["shortlist_weights"][i].to(device_embeddings[i])
                   for i in range(len(device_embeddings))]
        errors = nn.parallel.parallel_apply(
            nn.parallel.replicate(criterion, device_embeddings),
            list(zip(outputs, targets)))
        errors_gather = nn.parallel.gather(errors, target_device=device_embeddings[0])
        total_error = errors_gather.sum()
        print("after",
              torch.cuda.memory_allocated(1) / (1024 * 1024 * 1024),
              torch.cuda.memory_allocated(2) / (1024 * 1024 * 1024))
        for output in outputs:
            del output
        for target in targets:
            del target
        for x in shortlists:
            del x
        for x in encoded_replicate:
            del x
        torch.cuda.empty_cache()
        print("after del",
              torch.cuda.memory_allocated(1) / (1024 * 1024 * 1024),
              torch.cuda.memory_allocated(2) / (1024 * 1024 * 1024))
        return total_error

But the GPUs run out of memory after some batches. Particularly, I observe behavior like this:

before 5.049468994140625 5.049465179443359
after 5.1367316246032715 5.1367268562316895
after del 5.1367316246032715 5.1367268562316895
before 5.136678695678711 5.1366729736328125
after 5.223941326141357 5.223934650421143
after del 5.223941326141357 5.223934650421143

So there seems to be some leakage in the forward_with_error_calc function, but I can't figure out what it is.
Can someone please help me in figuring this out? TIA.
st179361
Hi all, I want to update the weights if the loss value is less than some threshold. It works okay for the single-gpu case but gets halted (or sometimes throw gpu memory error) when using “DistributedDataParallel” on a single node. Here is an example to reproduce the error. Can you folks help me to figure out this problem? import os from datetime import datetime import argparse import torch.multiprocessing as mp import torchvision import torchvision.transforms as transforms import torch import torch.nn as nn import torch.distributed as dist def main(): parser = argparse.ArgumentParser() parser.add_argument( "-n", "--nodes", default=1, type=int, metavar="N", help="number of data loading workers (default: 4)", ) parser.add_argument( "-g", "--gpus", default=1, type=int, help="number of gpus per node" ) parser.add_argument( "-nr", "--nr", default=0, type=int, help="ranking within the nodes" ) parser.add_argument( "--epochs", default=2, type=int, metavar="N", help="number of total epochs to run", ) args = parser.parse_args() args.world_size = args.gpus * args.nodes os.environ["MASTER_ADDR"] = "tcp://127.0.0.1" os.environ["MASTER_PORT"] = "23456" mp.spawn(train, nprocs=args.gpus, args=(args,)) class ConvNet(nn.Module): def __init__(self, num_classes=10): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), ) self.layer2 = nn.Sequential( nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), ) self.fc = nn.Linear(7 * 7 * 32, num_classes) def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = out.reshape(out.size(0), -1) out = self.fc(out) return out def train(gpu, args): rank = args.nr * args.gpus + gpu dist.init_process_group( backend="nccl", init_method="tcp://127.0.0.1:23456", world_size=args.world_size, rank=rank, ) torch.manual_seed(0) model = ConvNet() torch.cuda.set_device(gpu) model.cuda(gpu) batch_size = 100 # define loss function (criterion) and optimizer criterion = nn.CrossEntropyLoss().cuda(gpu) optimizer = torch.optim.SGD(model.parameters(), 1e-4) # Wrap the model model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu]) # Data loading code train_dataset = torchvision.datasets.MNIST( root="./data", train=True, transform=transforms.ToTensor(), download=True, ) train_sampler = torch.utils.data.distributed.DistributedSampler( train_dataset, num_replicas=args.world_size, rank=rank ) train_loader = torch.utils.data.DataLoader( dataset=train_dataset, batch_size=batch_size, shuffle=False, num_workers=0, pin_memory=True, sampler=train_sampler, ) start = datetime.now() total_step = len(train_loader) for epoch in range(args.epochs): for i, (images, labels) in enumerate(train_loader): images = images.cuda(non_blocking=True) labels = labels.cuda(non_blocking=True) # Forward pass outputs = model(images) loss = criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() # Get halts here if loss.item() > 1.8: loss.backward() else: print("skipping batch:", loss.item()) optimizer.step() print("GPU:{}, Epoch [{}/{}], Step [{}/{}], Loss: {}".format(gpu,epoch + 1, args.epochs, i + 1, total_step, loss)) if gpu == 0: print("Training complete in: " + str(datetime.now() - start)) if __name__ == "__main__": main()
st179362
if loss.item() > 1.8:
    loss.backward()
else:
    print("skipping batch:", loss.item())

The above might be the cause of the problem. When using DistributedDataParallel, the backward() pass triggers gradient synchronization communication (all_reduce) across all processes, meaning that all processes need to agree on the number and order of all_reduce calls. However, the above code may skip the backward pass in some processes but not in others. If that is the case, then the processes can run into desync and hang.
st179363
Thanks for the explanation. But I want this: if the loss in any process exceeds some threshold, then no process should do the gradient update. Is this achievable when using DistributedDataParallel?
st179364
When using DistributedDataParallel (DDP), loss is a local variable; DDP will not communicate the loss across processes. In order to make this work, you can do the following on each process (see the sketch below):

1. Run forward on the DDP model to calculate the loss.
2. Create a tensor that represents whether the loss is larger than the threshold.
3. Use all_reduce or all_gather to collectively communicate this information to all processes.

After step 3, all processes will have the same view on whether they should launch backward + step or not, and hence they can avoid running into desync problems.
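A minimal sketch of that agreement step, written against the reproduction script earlier in the thread (the 1.8 threshold comes from that script; the MAX reduction is one way to make any single rank's skip decision global, which is an assumption about the desired policy):

# Inside the training loop from the script above, after computing `loss`.
# Skip the update on every rank if the loss on any rank exceeds the threshold.
skip = torch.tensor([1.0 if loss.item() > 1.8 else 0.0], device=loss.device)
dist.all_reduce(skip, op=dist.ReduceOp.MAX)   # every rank now sees the same flag
optimizer.zero_grad()
if skip.item() == 0:
    loss.backward()    # all ranks enter backward together, so DDP stays in sync
    optimizer.step()
else:
    print("skipping batch on all ranks:", loss.item())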
st179365
Hi, can somebody please answer the following questions:

1. Can I create the model and a custom data iterator inside the main_method?
2. Will there be 4 data sets loaded into RAM / CPU memory?
3. Will each "for batch_data in ..." loop iterate independently?
4. Will the model be updated on every independent batch operation? Obviously I don't want to end up with four independent models. What's the process flow in this case, e.g. when are gradients updated?

I have seen this solution, but it uses a DataLoader (not a custom iterator) and the model is instantiated before the train method is called: https://github.com/pytorch/examples/tree/master/mnist_hogwild

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.l1 = nn.Linear(100, 50)
        self.l2 = nn.Linear(50, 2)

    def forward(self, x):
        return self.l2(self.l1(x))

class CustomDataClassIterator():
    def __init__(self):
        self.data = None
        self.batch_size = 10

    def __iter__(self):
        while True:
            yield

def main_method(i, args):
    print(i, datetime.datetime.now())
    model = Net()
    data = CustomDataClassIterator()
    for epoch in args.epoch_n:
        for batch_data in data:
            pass  # some stuff

if __name__ == '__main__':
    args = {'test': 10}
    torch.multiprocessing.spawn(fn=main_method, args=(args), nprocs=4)
st179366
"Can I create the model and a custom data iterator inside the main_method?"

Given the above example, you created a generator to produce input data? If that is the case, yes, sure you can do that.

"Will there be 4 data sets loaded into RAM / CPU memory?"

I am assuming the `pass # some stuff` statement will be replaced by actual forward-backward-step code? If that is the case, then the 4 data sets won't be loaded into memory at the same time. Instead, each data set will no longer be needed and can be garbage-collected at the end of every iteration.

"Will each 'for batch_data in ...' loop iterate independently?"

Yes. Each will have its own forward pass (building the autograd graph), backward pass (generating grads and syncing them if necessary), and step function (updating params).

"Will the model be updated on every independent batch operation? Obviously I don't want to have four independent models. What's the process flow in this case, e.g. when are gradients updated?"

When you call backward, the gradients are accumulated into Tensor.grad, and it is up to you when to call Optimizer.step() to apply those grads to the parameters.
st179367
I saw you had a pointer to the hogwild training example. Could you please elaborate on your use case? Are you looking for distributed data parallel training (like nn.parallel.DistributedDataParallel) or specifically asking about hogwild?