st177568 | @ptrblck Thanks a lot for this. It works after your suggestion. I will create a GitHub issue later.
I have a further question regarding saving and loading wrapped model.
In this project, I will need to save the trained model and load it for use later.
If I wrap the model like this:
wrapped_contact_net=torch.nn.DataParallel(contact_net)
Am I supposed to save it like this?
try:
state_dict = wrapped_contact_net.module.state_dict()
except AttributeError:
state_dict = wrapped_contact_net.state_dict()
torch.save(state_dict, model_path)
If I want to retrain the trained model, do I have to wrap the model before I load the state dict?
wrapped_contact_net=torch.nn.DataParallel(contact_net)
wrapped_contact_net.load_state_dict(torch.load(model_path,map_location=device))
or wrap the model after I load?
contact_net.load_state_dict(torch.load(model_path,map_location=device))
wrapped_contact_net=torch.nn.DataParallel(contact_net)
Thanks in advance. |
st177569 | I would store the model.module.state_dict() to make it independent from nn.DataParallel.
This would also mean your second approach is correct, i.e. create the model, load the state_dict, wrap it in nn.DataParallel. |
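For reference, here is a minimal sketch of the pattern described above; nn.Linear stands in for the actual contact_net, and model_path/device are placeholders:
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_path = "contact_net.pt"  # placeholder path

contact_net = nn.Linear(10, 2)  # stand-in for the real model
wrapped_contact_net = nn.DataParallel(contact_net).to(device)
# ... training ...

# Save the underlying module's state_dict so the checkpoint is
# independent of the nn.DataParallel wrapper (no "module." prefix).
torch.save(wrapped_contact_net.module.state_dict(), model_path)

# To retrain later: create the plain model, load the state_dict,
# then wrap it in nn.DataParallel again.
contact_net = nn.Linear(10, 2)
contact_net.load_state_dict(torch.load(model_path, map_location=device))
wrapped_contact_net = nn.DataParallel(contact_net).to(device)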
st177570 | @ptrblck Thanks, I am OK with loading now. As for saving, do you mean I should save with the original model name, like:
state_dict = contact_net.module.state_dict()
instead of using the wrapped model:
state_dict = wrapped_contact_net.module.state_dict() |
st177571 | You should use the nn.DataParallel object, so wrapped_contact_net in your case (I just used model as a placeholder, as usually you would just override the model variable). |
st177572 | I'm training a DDP model with 2x 2080 Ti GPUs.
I found that rank 0 always slows down at night.
The batch time of rank 0 increases at about 23:00 and decreases at about 8:00.
How can I solve this problem? Thanks |
st177573 | Hey @kaka_zhao, this is a very interesting finding. There is no time-based algorithm in DDP. Do any other users share the same cluster with you? I wonder if it is possible that some recurring job is kicked off every day at 23:00, which competes for network/GPU resources with your job? |
st177574 | Thanks for your reply. I found an Xorg process running at night. Now I think this problem may be attributed to my remote access via VNC. |
st177575 | Hello all,
I am doing distributed training using model = nn.DataParallel(model, device_ids=[0, 1]). Is there any option to indicate what fraction of memory usage should go to which GPU, or something like that? The two GPUs I have might not be equally occupied, and hence, when I do distributed training, one might OOM.
Thanks,
Megh |
st177576 | I don’t believe there is any such option. In DataParallel (and DistributedDataParallel), every GPU will have a replica of the model for local training, and every GPU will see input batches of the same size. Thus the amount of memory used by each rank (each GPU) in distributed training is approximately the same. If one of your GPUs is OOMing, you can try to:
reduce the batch size (though this will reduce the batch size for every rank, even on the GPU with enough memory)
use an optimizer that stores less local state (such as SGD as opposed to Adam) |
st177577 | Thank you @osalpekar. But if the batch size remains the same across the multiple GPUs, where does the distribution of training happen? I mean, which part of the memory load on a single GPU gets split across multiple GPUs? Please let me know if I am missing something. |
st177578 | Each Data Loader pairs with one DDP instance. So if you define a batch size of 64 in your data loader, each replica trains on a size-64 batch. When training on 2 GPUs, this essentially means you train on 64*2=128 samples in one iteration. Each replica performs the forward pass independently on their separate batches, then in the backward pass, they communicate gradients computed with each other and take the average of them, so each replica has gradients as if they trained on an entire 128-sample batch themselves. Finally, each replica performs the optimization step using the averaged gradients, so they end up with the same model weights at the end of each iteration.
In essence, the speed-up comes from the fact that with n GPUs, you are able to train on n times as much data in each iteration. For this reason, the memory load on a single-node model and a distributed model are essentially the same (there may be small overheads for synchronizing the gradients but these are unlikely to influence which model/hyperparameters are chosen). |
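To make the per-replica batching described above concrete, here is a rough sketch; the dataset is random dummy data and the batch size is arbitrary:
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def make_rank_loader(rank, world_size, per_gpu_batch_size=64):
    # Each rank gets its own shard of the data; every replica then sees
    # per_gpu_batch_size samples per iteration, so the effective global
    # batch is per_gpu_batch_size * world_size.
    dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 5))
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    return DataLoader(dataset, batch_size=per_gpu_batch_size, sampler=sampler)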
st177579 | Hi, I have a question about how DDP computes the average. I read in DDP backward pass 3 that when all buckets are ready, the local Reducer will block, waiting for all allreduce operations to finish. But what happens when several GPUs run at different speeds? For example, when two GPUs (e.g. cuda:0 and cuda:1) run 1.5x faster than the other GPUs (the code and processing are the same), cuda:0 and cuda:1 will produce more gradients. Will they save these gradients in the bucket and wait for the other GPUs to get ready, or will they just abandon these gradients and reduce only the gradients that are ready on all GPUs?
Thanks a lot. |
st177580 | Solved by osalpekar in post #3
Just to add some more insight to this, we have a bucket_cap_mb argument in the DDP constructor. This defines the size of a gradient bucket in megabytes. During the backward pass, each rank fills the bucket with gradients and then kicks off the allreduce collective. Faster ranks will kick off the all… |
st177581 | The faster GPU processes will wait for other GPUs to finish their backward computation.
By default, DDP synchronizes gradients and parameters and then performs the next forward computation.
The differing speed case is quite common in practice.
You can take a look at the forward function of DDP.
github.com
pytorch/pytorch/blob/master/torch/nn/parallel/distributed.py#L675 2
# Calling _rebuild_buckets before forward compuation,
# It may allocate new buckets before deallocating old buckets
# inside _rebuild_buckets. To save peak memory usage,
# call _rebuild_buckets before the peak memory usage increases
# during forward computation.
# This should be called only once during whole training period.
if self.reducer._rebuild_buckets():
    logging.info("Reducer buckets have been rebuilt in this iteration.")
if self.require_forward_param_sync:
    self._sync_params()
if self.ddp_uneven_inputs_config.ddp_join_enabled:
    # Notify joined ranks whether they should sync in backwards pass or not.
    self._check_global_requires_backward_grad_sync(is_joined_rank=False)
if self.device_ids:
    if len(self.device_ids) == 1:
        inputs, kwargs = self.to_kwargs(inputs, kwargs, self.device_ids[0])
        output = self.module(*inputs[0], **kwargs[0])
    else: |
st177582 | Just to add some more insight to this, we have a bucket_cap_mb argument in the DDP constructor. This defines the size of a gradient bucket in megabytes. During the backward pass, each rank fills the bucket with gradients and then kicks off the allreduce collective. Faster ranks will kick off the allreduce collective earlier than the slower ranks, so they will just block until the slower ranks kick off the collective. No gradients are abandoned in this process. You can tune the bucket_cap_mb as desired, and this will trigger allreduce more frequently for smaller buckets and less frequently for larger buckets.
If the performance difference is too great, you can explore syncing gradients less frequently (every n batches instead of every batch) using the model.no_sync() context manager or using multiple process groups (using the new_group API). If you find DDP training getting stuck due to excessively long hang-times due to these blocked collectives, you may look into using torchelastic and some mechanism to timeout hanging collectives (such as NCCL_ASYNC_ERROR_HANDLING or NCCL_BLOCKING_WAIT - docs here 4) |
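For the "sync gradients every n batches" suggestion, a rough sketch using the no_sync() context manager; ddp_model, loader, loss_fn, and optimizer are placeholders for your own objects:
def train_with_less_frequent_sync(ddp_model, loader, loss_fn, optimizer, n=4):
    # Accumulate gradients locally and only allreduce on every n-th batch.
    for i, (inputs, targets) in enumerate(loader):
        if (i + 1) % n != 0:
            # no_sync() skips the gradient allreduce for this backward pass;
            # gradients simply accumulate in param.grad on each rank.
            with ddp_model.no_sync():
                loss_fn(ddp_model(inputs), targets).backward()
        else:
            # This backward also reduces the gradients accumulated above.
            loss_fn(ddp_model(inputs), targets).backward()
            optimizer.step()
            optimizer.zero_grad()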
st177583 | Hello,
I have a strange issue when loading checkpoints from a long-running training process. Before I create an issue on GitHub about this I wanted to know if anyone has encountered something like this, or whether it is maybe a known issue?
I have a training process that has been running for about 22 days now (Pytorch 1.6, DistributedDataParallel using 4 GPUs, Pytorch native mixed precision training). I checked a saved checkpoint after about 8 days and after loading it, it worked well (inference results made sense). Now after 22 days, I loaded the most recent checkpoint and got inference results that made no sense.
So I started comparing the parameters in both checkpoints. What I found was that in the ‘faulty’ checkpoint the first two values of each parameter tensor are close to zero, while this is not the case in the good checkpoint. Please see the attached images.
Is any issue like this known to you?
Good checkpoint:
good-checkpoint-221120202688×1954 892 KB
Faulty checkpoint:
faulty-checkpoint-221120202702×1958 939 KB
Note: just to be clear, the faulty checkpoint was from much later in the training process and should have a much lower loss (as was calculated during training). |
st177584 | What was the training and validation accuracy of the “faulty” checkpoint before storing it?
If I understand your issue correctly, you are seeing worse results when reloading the checkpoint than during training? |
st177585 | Hi @ptrblck,
Thanks for responding. This is not an overfitting issue for sure, but to give you some insight into the model performance I added some of the tensorboard graphs at the end. Yes, the issue occurs when I load the checkpoint.
To provide some more context, this is an open-domain chatbot model. I have trained these kinds of models using PyTorch many times before and I know the difference between a model that is not trained well in some way and one where there is really something technically wrong. When it is not trained well (under-fitted or over-fitted), you can still make a conversation with it, but it doesn't make much sense, and/or you might see glitches in the text generation. In the case of the ‘faulty’ checkpoint the model simply outputs garbage, without any structure at all.
Further, it is very strange that all parameter Tensors, over all layers(!), start with the same kind of values (zero, or very close to zero). This can’t be right.
Some more details; chat_gru and conversation_gru are actually not GRUs, but Simple Recurrent Units (SRUs), also see https://github.com/asappresearch/sru.
Because I started my experiment using the SRUs for the first time, I thought maybe the issue is related to using the SRUs. But if that is the case, why would other layers (e.g. feed-forward layer out) also have the first two parameters close to zero? Could it be that on the C level, memory gets overwritten? (SRU has a C/CUDA implementation)
When I started the experiment, I also started using Pytorch 1.6.0, so maybe it is related to that, I can’t really tell.
Below are some screenshots of my Tensorboard for this experiment:
Moving average of cross entropy loss for training and validation set at earlier “good”/“functioning” checkpoint:
good-checkpoint-performance-241120202888×1346 525 KB
Moving average of cross entropy loss for training and validation set at later “faulty”/“garbage-out” checkpoint:
faulty-checkpoint-performance-241120202932×1438 550 KB
Cross entropy over whole training set, calculated once per epoch:
epoch-training-cross-entropy-before-faulty-checkpoint-241120201206×688 44.1 KB
Cross entropy over whole validation set, calculated once per epoch:
epoch-validation-cross-entropy-before-faulty-checkpoint-241120201160×654 46.1 KB
Last note: I use a dropout of 0.2, which is why the training loss is higher than the validation loss. |
st177586 | Did you check the values in the good run and saw that they are not close to zero?
I would start with a comparison between the “good” model after training vs. the model after reloading.
Also, could you use the latest nightly binary, as I’ve isolated a checkpoint issue a few weeks ago (which should be fixed by now), which corrupted the loading of state_dicts if CUDATensors were stored. |
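A small sketch for such a comparison; the paths and model names are placeholders:
import torch

def compare_state_dicts(sd_a, sd_b):
    # Print the largest absolute difference per parameter between two state_dicts.
    for name, tensor in sd_a.items():
        if name not in sd_b:
            print(f"{name}: missing in the second state_dict")
            continue
        diff = (tensor.float().cpu() - sd_b[name].float().cpu()).abs().max().item()
        if diff > 0:
            print(f"{name}: max abs diff {diff:.3e}")

# Usage sketch: compare the in-memory model right after training
# with the checkpoint reloaded from disk.
# compare_state_dicts(model.state_dict(),
#                     torch.load("checkpoint.pt", map_location="cpu"))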
st177587 | Hey @ptrblck,
Did you check the values in the good run and saw that they are not close to zero?
Yeah, that's exactly what I did, as explained in my initial post. In the attached images of my first post you can see parameters of the ‘good’ checkpoint and the ‘faulty’ checkpoint. As you can see, the parameter tensors in the good checkpoint do not start with values close to zero.
Also, could you use the latest nightly binary, as I’ve isolated a checkpoint issue a few weeks ago (which should be fixed by now), which corrupted the loading of state_dicts if CUDA tensors were stored.
Will do! Is there a merge request and/or a related issue I can read about this issue? |
st177588 | I see that this fix was merged into pytorch:release/1.7 on the 12th of October, and the 1.7.0 release was on the 23rd of October. So I can assume that your fix landed in PyTorch 1.7.0, correct? |
st177589 | I am training a model to segment 3D images in a slice-by-slice fashion. To distribute over multiple GPUs I am using DistributedDataParallel, and I use DistributedSampler to split the dataset across the GPUs.
During prediction of new cases, I use a similar Dataset and DataLoader setup, and I can basically gather a dictionary like: {'filename': [(slice_no_1, slice_1_pred), (slice_no_2, slice_2_pred)], ...} which I can subsequently sort on the first index to get an output. However, when I use DistributedSampler the slices are distributed across two GPUs, and I therefore end up with two dictionaries which most likely are both incomplete (each containing slices the other is missing).
How do I gather these two dictionaries? As I preferably cast the predictions to a numpy array, it might be most convenient to gather these in the CPU memory. |
st177590 | Hi @jteuwen,
I’m not sure I understand your issue, but I’ll give it a shot. You’re doing validation of your model and you’re using a distributed sampler on the validation set. This means you have a partial result on each process, and you’re looking to combine them into a single result, for the final accuracy numbers?
Or… perhaps the sampler splits the array of slices for a single filename across GPUs and you want to keep the predictions for a single filename on a single process? |
st177591 | Hi @pietern
No, training and validation are done on a slice-by-slice basis, while the data are 3D MRI images. My Dataset outputs a dictionary with the data, including a key that says which file the slice belongs to and what the index of the slice is. I use the same setup for predicting new cases. However, for that I would like to recombine the data into a 3D volume again. With one GPU that is fine: when processing a complete dataset you can combine based on the dictionary keys denoting the filename and the slice number.
When doing this with a DistributedSampler, you have multiple Python processes each holding a part of the dataset. In this case, even when the sampling is sequential, part of the slices can end up in one process and the rest in another process. To recombine them I would need access, in the process with rank 0, to the dictionaries of all the other processes containing the slices.
Solutions I have come up with now:
Dump each slice to disk and when done, combine them in process rank 0
Use a memorymap to do the same thing (but can do with pickled dictionaries)
Use something such as Redis to store the results in. Extra bonus is that it would be easier to distribute as we already use Redis.
However, that seems quite convoluted for a reasonably simple problem. I could change the Dataset classes and the sampler specifically for this purpose, but that has the disadvantages that (1) if I change something in the dataset / dataloader I would need to change it in two places, which is a source of bugs, and (2) it is also tricky to implement a multi-GPU model which scales well across a cluster. |
st177592 | I understand, thanks for the clarification.
You can use existing torch.distributed primitives gather or all_gather to get all results to a single or all processes, respectively. You say you’re outputting dictionaries, so you can’t do it with functions in core yet, and would need to serialize the dictionaries yourself. Coincidentally, @mrshenli and I were talking about adding this to core yesterday, and he created an issue to track it: https://github.com/pytorch/pytorch/issues/23232 42. This doesn’t solve your issue right now though.
To solve this today, I would indeed write everything to a shared filesystem if you have one (with torch.save), probably named after the rank of the process, run a barrier to ensure all writes are done, and then torch.load all of them on the process where you want to collect the results. |
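A minimal sketch of that shared-filesystem approach, matching the dictionary layout described earlier in the thread; the output directory is a placeholder and a process group is assumed to be initialized:
import os
import torch
import torch.distributed as dist

def gather_via_filesystem(results, out_dir, rank, world_size):
    # Write each rank's partial results to disk, then collect them on rank 0.
    os.makedirs(out_dir, exist_ok=True)
    torch.save(results, os.path.join(out_dir, f"results_rank{rank}.pt"))
    dist.barrier()  # make sure every rank has finished writing
    if rank != 0:
        return None
    merged = {}
    for r in range(world_size):
        part = torch.load(os.path.join(out_dir, f"results_rank{r}.pt"))
        for filename, slices in part.items():
            merged.setdefault(filename, []).extend(slices)
    # Sort each volume's slices by slice index before recombining them.
    for filename in merged:
        merged[filename].sort(key=lambda pair: pair[0])
    return merged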
st177593 | Thanks for your reply - that does seem like a good addition to the code base. By the way: if I were to use the torch.distributed primitives, then since nccl is the backend, wouldn’t that transfer through GPU memory (since nccl does not support CPU ipc)? That might be inconvenient for my use case as well. |
st177594 | That’s correct. It would require serialization on the CPU side, copy to GPU memory to perform the collective, copy back to CPU memory, and then deserialization. Alternatively, you can create a separate Gloo group and use that, with torch.distributed.new_group 18. |
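In more recent PyTorch releases (roughly 1.8 onward) there is also torch.distributed.all_gather_object, which handles the serialization for you; combined with a separate Gloo group it keeps the transfer off the GPUs. A sketch, assuming the default (NCCL) group is already initialized:
import torch.distributed as dist

# A CPU-friendly Gloo group alongside the default NCCL group.
gloo_group = dist.new_group(backend="gloo")

local_results = {"case_001": [(0, "pred_0"), (1, "pred_1")]}  # placeholder
gathered = [None] * dist.get_world_size()
dist.all_gather_object(gathered, local_results, group=gloo_group)
# On every rank, gathered[r] now holds rank r's dictionary.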
st177595 | @jteuwen I have a question for you. Are you loading a single 3D file that then gets sliced and passed into training a model? If so, one question I have is: when doing multi-GPU training via DDP, could you not run into a situation where multiple processes get different slices that originate from the same 3D file and would therefore want to access the file at the same time? |
st177596 | @solarflarefx Apologies that I missed your question.
It might be possible that in this way multiple processes try to access the same volume. What I do is make a BatchSampler, which takes this into account and tries to split the volumes over the different processes instead (using the rank). |
st177597 | Hi,
I am using PyTorch through an Anaconda environment and something weird happens. While working, or if I leave the machine for some time and come back, PyTorch stops recognizing the GPU. The only way it starts recognizing the GPU again is after rebooting the machine.
Why does this happen? |
st177598 | Solved by user_123454321 in post #4
This happens to me sometimes and to fix without rebooting I reload gpu using
$ sudo rmmod nvidia_uvm
$ sudo modprobe nvidia_uvm
No idea why it happens though |
st177599 | You mean torch.cuda.device_count() returns 0? Can you confirm nvidia-smi still works correctly when that happens? And can you also check what is the value for CUDA_VISIBLE_DEVICES env var? |
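For reference, a quick diagnostic along the lines suggested above:
import os
import torch

print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("torch.cuda.is_available():", torch.cuda.is_available())
print("torch.cuda.device_count():", torch.cuda.device_count())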
st177600 | Hi,
Yeah. torch.cuda.device_count() returns 0 and torch.cuda.current_device() returns the following:
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCGeneral.cpp line=47 error=999 : unknown error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/anaconda3/envs/work/lib/python3.8/site-packages/torch/cuda/__init__.py", line 330, in current_device
_lazy_init()
File "/home/user/anaconda3/envs/work/lib/python3.8/site-packages/torch/cuda/__init__.py", line 153, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (999) : unknown error at /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCGeneral.cpp:47
nvidia-smi works. For CUDA_VISIBLE_DEVICES, I get nothing. |
st177601 | This happens to me sometimes, and to fix it without rebooting I reload the GPU kernel module using
$ sudo rmmod nvidia_uvm
$ sudo modprobe nvidia_uvm
No idea why it happens though |
st177602 | This works. How did you find this solution? It’s so weird, right? Suddenly it stops working. I think there’s some internal functioning of PyTorch that changes something |
st177603 | Hmmm…since it worked after rebooting my laptop, I guessed it should work by just reloading the GPU. So I searched online for how to reset the NVIDIA GPU. |
st177604 | Thanks for the solution. This has been bothering me for quite some time now. I’m sure they’ll fix this in later versions. |
st177605 | Flock1:
I think there’s some internal functioning of pytorch that changes something
Are any other CUDA applications running fine, i.e. are you able to run some CUDA examples etc.?
I’m not sure, if this is a PyTorch-related issue or rather a CUDA/NVIDIA driver issue. |
st177606 | I didn’t check that unfortunately. I can try checking with Keras whether that library is also unable to recognize the GPU.
I’ll also try running CUDA examples from within the environment and outside it. |
st177607 | Hello Flock!
Flock1:
While working or if I leave the machine for some time and come back, PyTorch stops recognizing the GPU. And the only way it starts recognizing the GPU is after rebooting the machine.
Just to share my experience (with an old version of pytorch and an
old gpu):
I see something similar to this. If I launch a fresh python session
and run a pytorch script that uses cuda, then if I don’t use cuda
(or maybe just the python session) for a short-ish amount of time,
future use of cuda in that python session fails.
But I don’t have to reboot my machine or “reload the gpu” to get
it working again; I only have to exit and restart python.
I haven’t found any fix for it – I just live with it, restarting python as
necessary.
Here’s a post of mine with some related observations:
Some observations on "cuda runtime error (30)"
Hello Forum!
I have some information about the behavior of “cuda runtime
error (30)” (probably somewhat specific to my particular
configuration).
This is a follow-on to a number of threads about “error 30,”
and, in particular, to this post:
Clued in by Andrei’s observation that torch.cuda.is_available()
“breaks” cuda, I find (for me) that if torch.cuda.is_available()
is the first cuda call, subsequent cuda calls will throw “error 30”
unless the first subsequent call is called promptly…
Best.
K. Frank |
st177608 | Can you please share the versions of PyTorch and CUDA you are using (and perhaps a GPU type)?
Also, are there any messages printed to the kernel log (can be checked by running dmesg) when this happens? |
st177609 | Hey Frank,
Thank you for sharing your experience.
I see something similar to this. If I launch a fresh python session
and run a pytorch script that uses cuda, then if I don’t use cuda
(or maybe just the python session) for a short-ish amount of time,
future use of cuda in that python session fails.
This is what happens, but unfortunately for me I either have to restart the machine or reload the GPU. Just restarting Python didn’t help. I even tried reloading the conda environment. It’s as if a switch went off and I have to physically switch it on again. |
st177610 | I use PyTorch through a conda environment.
PyTorch: 1.5.1
Cuda tool kit: 10.1.243
On my machine, I have CUDA 11 for RTX 2070 Super GPU |
st177611 | Does working from an Anaconda environment affect this? Because the environment won’t use the CUDA installed on the machine but the one downloaded by Anaconda itself. |
st177612 | As you said, the cudatoolkit from the conda binaries will be used and your local CUDA11 installation will thus not be used.
What do you mean by “affect this”? |
st177613 | By ‘affect this’ I was referring to PyTorch suddenly stopping recognizing the GPU. So what I wanted to ask is how to check which CUDA is causing the problem: the one that was installed with Anaconda (cudatoolkit) or the one that is installed locally (CUDA 11). |
st177614 | Thanks for the explanation. As said, the cudatoolkit (shipped via the binaries) will be used.
However, I doubt that CUDA is responsible for this behavior and would first look into potential hardware, driver, PSU issues.
You could check dmesg for any XID errors. |
st177615 | user_123454321:
$ sudo rmmod nvidia_uvm
$ sudo modprobe nvidia_uvm
Worked! such a waste of time. |
st177616 | Today I wanted to do distributed training on many nodes, however the memory decreases rapidly.
I executed the command 'dstat -c -m'. We can see the available memory decrease from 64GB to 177MB.
My batch_size is 1 and the dataset includes 4000 samples; every sample is about 16MB. I use gloo as the backend.
The data loader code looks like this:
class HDF5Dataset(data.Dataset):
"""Represents an abstract HDF5 dataset.
Input params:
file_path: Path to the folder containing the dataset (one or multiple HDF5 files).
recursive: If True, searches for h5 files in subdirectories.
transform: PyTorch transform to apply to every data instance (default=None).
divide: use only 1//divide dataset
"""
def __init__(self, file_path, recursive, transform=None, divide =1):
super().__init__()
self.data_info = []
self.data_cache = {} #dict
self.transform = transform
self.length = 0
self.divide = divide
# Search for all h5 files
p = Path(file_path)
assert(p.is_dir())
if recursive:
files = sorted(p.glob('**/*.hdf5'))
else:
files = sorted(p.glob('*.hdf5'))
if len(files) < 1:
raise RuntimeError('No hdf5 datasets found')
self.length = len(files) // divide
self.data_info = files[0: self.length]
#print("File Prepared !\n")
def __getitem__(self, index):
# get data
y, x = self.get_data(index)
if self.transform:
x = self.transform(x)
else:
x = torch.from_numpy(x)
x = torch.reshape(x, (1,128,128,128))
x = x.type(torch.FloatTensor)
# get label
y = torch.from_numpy(y)
y = y.type(torch.FloatTensor)
return (x, y)
def get_data(self, i):
fp = self.data_info[i] #list - dict
#print(fp)
try:
with h5py.File(fp,'r') as h5_file:
label = h5_file.get('label')[()]
dataset = h5_file.get('dataset')[()]
return (label, dataset)
except IOError:
print('Error:File {filepath} is broken.'.format(filepath=fp))
i=i+1
return self.get_data(i)
def __len__(self):
return self.length
I want to know why PyTorch DDP training needs more memory as the number of nodes grows, and how to reduce the memory usage. |
st177617 | What is the world_size in your application? Each process will have its own model replica, so the total memory consumption is expected to be larger than world_size X (non-DDP memory footprint). Besides, DDP also creates buffers for communication, which will contribute another ~1.5X if not considering optimizer. |
st177618 | Thanks for your answer. I set the parameter nproc_per_node to 1, so I guess the world_size is 1. There is only one process per node. However, it still consumes a lot of memory. DDP adopts data parallelism, so I thought that when I train this model on more nodes, little additional memory would be consumed.
I found that the node cache always uses more than 20GB of memory, and if I cannot reduce the memory usage, I will have to choose another framework for distributed training = = ! |
st177619 | Hey @khalil
How large is your model?
My batch_size is 1 and datasets include 4000 samples,every sample is about 16MB.
Does this actually mean the memory is consumed by the data loader? Can you check the memory consumption when using the same model and same data loader without DDP? |
st177620 | Thank you, @mrshenli. I have trained this model on one node and it consumed 30GB of cache, just like this:
I guess there is a lot of data to be loaded. Could you help me check my data loader code? |
st177621 | Hey @khalil
I am trying to identify which component hogs memory. Could you please share some more details about the sentence below.
I have trained this model in one node and it consumed 30GB on the cache
By “trained this model in one node”, do you still use DDP and the same data loader? It will be helpful to know the memory footprint of the following cases:
Train a local model without data loader and without DDP. Just feed the model with some randomly generated tensors.
Train a local model with data loader but without DDP.
Wrap the model with DDP, and train it with data loader. |
st177622 | I have created a small dataset which only contains 30 samples. However, the memory still decreased rapidly (batch_size=1).
So I think this issue is not caused by the dataloader, and I am sure the reason is the node expansion.
When I do DDP training on one node, the memory problem is not serious. When I expand to more nodes, the memory problem is serious. I want to know if this is a PyTorch problem?
My model looks like this:
image1139×413 72.2 KB |
st177623 | Hey @khalil
Could you please provide details of the following question?
Which version of PyTorch are you using?
What’s the size of your model (how many GB)? You can get this by doing the following:
model_size = 0
for p in model.parameters():
model_size += p.numel() * p.element_size()
Given the picture shown in the previous post, it looks like the model is about 27GB? If that is the case, then, yes, DDP would use another 27GB as comm buffers. We are working on improving this: https://github.com/pytorch/pytorch/issues/39022 8
I am curious how you trained this model locally without DDP. If the model is 27GB, after the backward pass, the grads will also consume 27GB. So local training without DDP would also use >54GB memory?
And if your optimizer uses any momentum etc., it’s likely the optimizer will also consume a few more X of 27GB. But looks like there are only 64GB memory available? Is it because your optimizer does not contain any states?
Does this getting worse when you use more nodes (e.g., scale from 2 nodes to 3 nodes)? |
st177624 | @mrshenli, thanks for your general help. The details are as follows:
The version of PyTorch is 1.5.0.
I used your code to test the size of my model and it printed: model_size is 9865484. So I think the size of the model is about 10MB.
When I scaled from 2 nodes to 4 nodes, the available memory started to decrease. The memory usage went up as the number of nodes increased. |
st177625 | I use your code to test the size of my model and it printed this sentence: model_size is 9865484.So I think the size of model are 10MB
In this case DDP should only consume 10MB more memory for communication buffers.
BTW, as the model is just 10MB, do you know why even without DDP it consumes 30GB? And have you seen OOM errors?
When I scaled from 2 nodes to 4 nodes, the available memory started to decrease. The memory usage went up as the number of nodes increased.
This is also not expected with DDP. DDP’s memory footprint should be constant, regardless of how many nodes are used.
Do you have a minimum repro that we can investigate? |
st177626 | Dear @mrshenli, forgive me for not being clear. When I said it consumes 30GB, I meant that the cache
consumes 30GB. I guess that memory is used to load the data from disk. I tried using a small dataset
which only contains 40 samples (every sample is 16MB), and in this case the cache consumption is about 2.5GB. However, the memory problem still exists. You can see this picture:
Now I think the problem is not caused by the dataloader.
And you say DDP’s memory footprint should be constant, regardless of how many nodes are used. I tested the maximum memory footprint as the nodes expand; it looks like this:
node-memory1366×401 5.92 KB
The memory footprint does not include the cache consumption.
And when I tested the memory footprint with 32 nodes, I got a memory allocation error which indicates the memory had run out. |
st177627 | Thanks for the detailed information. This is weird. This is the first time that we saw the memory footprint per machine increases significantly with the total number of machines.
I guess the memory is used to load the disk data.I tried to use a small datasets
which only contains 40 samples(Every sample is 16MB).And in this case,the cache consumption is about 2.5GB.
Q1: This does not add up. The model is 10MB, and the dataset is 40X16MB = 640MB. I assume you do not use CUDA, as you mainly concerned about CPU memory. In this case, I would assume the total memory consumption to be less than 1GB. Do you know where does the other 1.5GB come from?
Q2: Can you try the following to see if the memory is indeed used by the process instead of some OS cache.
import os
import psutil
process = psutil.Process(os.getpid())
for _ in get_batch():
....
# print this in every iteration
print("Memory used: ", process.memory_info().rss)
Q3: Do you have a minimum reprodueable example that we can investigate locally?
Q4: BTW, how do you launch your distributed training job? Are you using torch.distributed.launch? If so, could you please share the command you are using? |
st177628 | @mrshenli, sorry I could not reply to you in time.
Now I have found that the other 1.5GB comes from the system. There was 1.5GB of consumption before I started this job, so I think the cache consumption figure is right.
I used the psutil module to test the process memory footprint. The results look like the following photos:
image1096×265 13.4 KB
(1 node)
image1226×343 16.2 KB
(16 node)
So the process memory footprint per machine does not increase with the total number of machines. However, the total memory footprint per machine increases significantly.
Then I found that the number of processes increases significantly with the number of machines.
I set nproc_per_node to 1, so there should be one process on every machine. However, that is not the case.
Here is my command:
bash.sh
MIP=$ip
MPORT=$port
NPROC_PER_NODE=1
HOSTLIST=$hostlist
COMMAND=$HOME/sample.py --epochs=120 --workers=0 --batch-size=1 --print-freq=50 --data=$HOME/datasets/v3
RANK=0
for node in $HOSTLIST; do
echo $node
ssh -q $node \
python -m torch.distributed.launch \
--nproc_per_node=$NPROC_PER_NODE \
--nnodes=$SLURM_JOB_NUM_NODES \
--node_rank=$RANK \
--master_addr="$MIP" --master_port=$MPORT \
$COMMAND > "log_v1_"${SLURM_JOB_ID}"_"${RANK}".out" &
RANK=$((RANK+1))
done
sample.py
....
if __name__ == '__main__':
if torch.distributed.is_available():
dist.init_process_group(backend='gloo',init_method='env://',timeout=datetime.timedelta(seconds=600))
main()
In addition,I do not use CUDA and I train this model in CPUs. |
st177629 | I do not think PyTorch is the cause of the mistake. I will check my shell script. Thank you @mrshenli |
st177630 | Hi @khalil, have you solved this problem?
I am actually encountering exactly the same problem as you. |
st177631 | Sorry about the delay, I have solved this problem. The reason is that I had executed the script on the same node (ssh failed). I think you can check whether the process has started on each node. |
st177632 | Hi guys, I ran into a very important error!
Training in DDP mode is normal, but when I resume the model, it gets OOM. If I do not resume, training is normal and the memory is enough.
So the problem is the resume part. But I simply resume the state dict and do nothing else, yet there are some operations done on the first GPU. I don’t know why!
Here is the resume part of my code:
last_weight = os.path.join(
hyp.weight_path, "{}_last.pt".format(model_prefix))
if os.path.exists(last_weight) and hyp.resume:
if rank in [0, -1]:
print('resume from last pth: ', last_weight)
chkpt = torch.load(last_weight)
# using local model to load state_dict, avoid module issue
local_model.load_state_dict(chkpt['model'])
start_epoch = chkpt["epoch"] + 1
if chkpt["optimizer"] is not None:
optimizer.load_state_dict(chkpt["optimizer"])
best_mAP = chkpt["best_mAP"]
del chkpt
else:
if rank in [0, -1]:
print('last pth not found or not resume, skip resume...')
As you can see, the local model has already been moved to the rank:
if hyp.train.sync_batch_norm and rank != -1:
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
if rank == 0:
logger.info('Using SyncBatchNorm()')
logger.info('batch size: {} on single GPU, total batch size: {}'.format(
bs, total_batch_size))
if hyp.distributed:
if rank == 0:
logger.info(
'Enable DDP mode, using all gpus. rank: {}'.format(rank))
dist.init_process_group("nccl", rank=rank, world_size=world_size)
local_model = model.to(rank)
model = torch.nn.parallel.DistributedDataParallel(
local_model, device_ids=[rank], output_device=rank)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset, rank=rank, shuffle=True)
# sampler_test = torch.utils.data.distributed.DistributedSampler(dataset_test)
train_dataloader = DataLoader(
train_dataset, sampler=train_sampler, batch_size=bs, num_workers=hyp.train.num_workers, pin_memory=True)
else:
local_model = model.to(rank)
train_dataloader = DataLoader(
train_dataset, batch_size=bs, num_workers=hyp.train.num_workers, shuffle=True, pin_memory=True)
This is my distributed setup code.
Does anyone know why? I would really appreciate it if anyone could help me out! |
st177633 | I also tried using model.load_state_dict rather than local_model.load_state_dict,
but it was the same: OOM!
I totally don’t know what to do now… |
st177634 | Hi, can you provide a full script (with model definition if possible) to reproduce the OOM issue? Also the stacktrace describing the OOM would be very helpful and help us debug. If there is indeed a reproducible script that produces the issue, feel free to file a bug at https://github.com/pytorch/pytorch/issues. |
st177635 | @rvarm1 Thanks for your reply!
I am currently not able to provide my model definition since it is a little messy and somewhat internal. But I think the main issue is in my train loop, so I provide my pseudo-code here; hopefully you professionals can spot what causes this issue:
def train(rank, hyp, world_size):
cuda = torch.cuda.is_available()
if cuda:
torch.cuda.set_device(rank)
start_epoch = 0
best_mAP = 0.0
multi_scale_train = hyp.train.multi_scale
model_prefix = get_model_name(hyp)
if rank in [0, -1]:
if multi_scale_train:
print("Using multi scales training")
else:
print("train img size is {}".format(hyp.train.train_image_size))
if hyp.data.data_format == 'coco':
train_dataset = CocoDataset(
hyp, anno_file_type="train", img_size=hyp.train.train_image_size)
elif hyp.data.data_format == 'fn_anno':
train_dataset = FnAnnoDataset(
hyp, anno_file_type="train", img_size=hyp.train.train_image_size)
elif hyp.data.data_format == 'voc':
ValueError('{} to be supported'.format(hyp.data.data_format))
# this batch size is total which on all cards
bs = hyp.train.batch_size
total_batch_size = hyp.train.batch_size * world_size
epochs = hyp.train.epochs
model = BuildModel(hyp)
if hyp.train.sync_batch_norm and rank != -1:
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
if rank == 0:
logger.info('Using SyncBatchNorm()')
logger.info('batch size: {} on single GPU, total batch size: {}'.format(
bs, total_batch_size))
if hyp.distributed:
if rank == 0:
logger.info(
'Enable DDP mode, using all gpus. rank: {}'.format(rank))
dist.init_process_group("nccl", rank=rank, world_size=world_size)
local_model = model.to(rank)
model = torch.nn.parallel.DistributedDataParallel(
local_model, device_ids=[rank], output_device=rank)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset, rank=rank, shuffle=True)
# sampler_test = torch.utils.data.distributed.DistributedSampler(dataset_test)
train_dataloader = DataLoader(
train_dataset, sampler=train_sampler, batch_size=bs, num_workers=hyp.train.num_workers, pin_memory=True)
else:
local_model = model.to(rank)
train_dataloader = DataLoader(
train_dataset, batch_size=bs, num_workers=hyp.train.num_workers, shuffle=True, pin_memory=True)
if hyp.loss.loss_type == 'yolov4':
criterion = YoloV4Loss(anchors=hyp.model.anchors, strides=hyp.model.strides,
iou_threshold_loss=hyp['train']['iou_threshold_loss']).to(rank)
elif 'yolov5' in hyp.loss.loss_type: # for all yolov5 models
criterion = YoloV5Loss(hyp, anchors=hyp.model.anchors, strides=hyp.model.strides,
iou_threshold_loss=hyp['train']['iou_threshold_loss']).to(rank)
elif hyp.loss.loss_type == 'yolomask':
# to be done this loss
criterion = YoloV4Loss(anchors=hyp.model.anchors, strides=hyp.model.strides,
iou_threshold_loss=hyp['train']['iou_threshold_loss']).to(rank)
else:
ValueError('Unsupported model arch: {}'.format(hyp.model.arch))
# Settings for Optimizer
nbs = 64 # nominal batch size
accumulate = max(round(nbs / total_batch_size), 1)
hyp.train.weight_decay *= total_batch_size * accumulate / nbs
pg0, pg1, pg2 = [], [], [] # optimizer parameter groups
for k, v in model.named_parameters():
v.requires_grad = True
if '.bias' in k:
pg2.append(v) # biases
elif '.weight' in k and '.bn' not in k:
pg1.append(v) # apply weight decay
else:
pg0.append(v) # all else
if hyp.train.optimizer == 'adam':
optimizer = optim.Adam(pg2, lr=hyp.train.lr_init, betas=(
hyp.train.momentum, 0.999)) # adjust beta1 to momentum
else:
optimizer = optim.SGD(pg2, lr=hyp.train.lr_init,
momentum=hyp.train.momentum, nesterov=True)
optimizer.add_param_group(
{'params': pg1, 'weight_decay': hyp.train.weight_decay})
logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' %
(len(pg2), len(pg1), len(pg0)))
del pg0, pg1, pg2
# Settings for lr strategy
# number of warmup iterations, max(3 epochs, 1k iterations)
nw = max(round(hyp.train.warmup_epochs * len(train_dataloader)), 1e3)
def lf(x): return ((1 + math.cos(x * math.pi / epochs)) / 2) * \
(1 - hyp.train.lrf) + hyp.train.lrf # cosine
scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
scheduler.last_epoch = start_epoch - 1
scaler = amp.GradScaler(enabled=cuda)
last_weight = os.path.join(
hyp.weight_path, "{}_last.pt".format(model_prefix))
if os.path.exists(last_weight) and hyp.resume:
if rank in [0, -1]:
print('resume from last pth: ', last_weight)
chkpt = torch.load(last_weight)
# using local model to load state_dict, avoid module issue
local_model.load_state_dict(chkpt['model'])
start_epoch = chkpt["epoch"] + 1
if chkpt["optimizer"] is not None:
optimizer.load_state_dict(chkpt["optimizer"])
best_mAP = chkpt["best_mAP"]
del chkpt
else:
if rank in [0, -1]:
print('last pth not found or not resume, skip resume...')
writer = SummaryWriter(logdir=hyp.log_path + "/event")
if rank == 0:
logger.info("Training start,img size is: {:d}, batchsize is: {:d}, work number is {:d}".format(
hyp.train.train_image_size, hyp.train.batch_size, hyp['train']['num_workers']))
logger.info("Train datasets number is : {}".format(len(train_dataset)))
logger.info('*'*20 + ' start training ' + '*'*20)
if hyp.fp16:
model, optimizer = amp.initialize(
model, optimizer, opt_level="O1", verbosity=0)
# hyp not allowed to change after all set
hyp.freeze()
for epoch in range(start_epoch, epochs):
start = time.time()
model.train()
...
optimizer.zero_grad()
for i, (imgs, label_sbbox, label_mbbox, label_lbbox,
sbboxes, mbboxes, lbboxes) in enumerate(train_dataloader):
...
with amp.autocast(enabled=cuda):
p, p_d = model(imgs)
loss, loss_ciou, loss_conf, loss_cls = criterion(
p, p_d, label_sbbox, label_mbbox, label_lbbox, sbboxes, mbboxes, lbboxes)
# Backward
if hyp.fp16:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
# scaler.scale(loss).backward()
else:
loss.backward()
# Accumulate gradient for x batches before optimizing
if ni % accumulate == 0:
# print('accumulate: ', accumulate)
optimizer.step()
optimizer.zero_grad()
torch.cuda.empty_cache()
def main(hyp):
get_gpu_prop(True)
world_size = get_gpu_devices_count()
print('world size: ', world_size)
if hyp.distributed:
print('Start distributed training...')
mp.spawn(train,
args=(hyp, world_size,),
nprocs=world_size,
join=True)
else:
print('Start single GPU training...')
train(0, hyp, world_size)
if __name__ == "__main__":
parser = argparse.ArgumentParser('YoloV5 written by Me')
parser.add_argument('-c', '--config', type=str, default='configs/tiiii/v4_mbv3.yml',
help='config file path to train.')
parser.add_argument("--resume", action='store_true')
parser.add_argument("--pretrain_path", type=str, default="weights/mobilenetv3.pth",
help="weight file path")
parser.add_argument("--accumulate", type=int, default=2,
help="batches to accumulate before optimizing")
parser.add_argument("--fp16", type=bool, default=False,
help="whither to use fp16 precision")
parser.add_argument("opts", default=None,
nargs=argparse.REMAINDER, help="rest options")
opt = parser.parse_args()
cfg.merge_from_file(opt.config)
cfg.merge_from_list(opt.opts)
hyp = cfg
os.makedirs(hyp.weight_path, exist_ok=True)
os.makedirs(hyp.log_path, exist_ok=True)
if get_gpu_devices_count() > 1:
hyp.distributed = True
# Automatic mixed precision
hyp.amp = False
if torch.cuda.is_available() and torch.__version__ >= "1.6.0":
capability = torch.cuda.get_device_capability()[0]
if capability >= 7: # 7 refers to RTX series GPUs, e.g. 2080Ti, 2080, Titan RTX
hyp.amp = True
print("Automatic mixed precision (AMP) is enabled!")
main(hyp)
The main issue is: when training from scratch with 8 GPUs, the memory is balanced perfectly, but when resuming, where the only difference is the weight-loading part, the GPU memory becomes imbalanced and causes the first GPU to OOM.
As you can see, I don’t know how to solve this issue now. |
st177636 | Hey @jinfagang
Are you asking rank 0 to save the model and then all ranks to load from that checkpoint? If so, you might need to provide a map_location arg when calling torch.load, otherwise, it might load to the device where it was saved. If this still doesn’t fix the problem, I would try first move the model to CPU, and then save it. When loading, always load it to CPU on all ranks, and then explicitly move it to the destination device. |
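A minimal sketch of that suggestion, reusing the checkpoint layout from the resume snippet above (the 'model' key comes from that snippet; everything else is a placeholder):
import torch

def resume_on_rank(local_model, checkpoint_path, rank):
    # map_location="cpu" prevents every rank from loading the tensors onto
    # the GPU the checkpoint was originally saved from (often cuda:0).
    chkpt = torch.load(checkpoint_path, map_location="cpu")
    local_model.load_state_dict(chkpt["model"])
    # Only move to this rank's GPU after loading.
    local_model.to(rank)
    return chkpt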
st177637 | Logging prints nothing in the following code:
#!/usr/bin/python
# -*- coding: UTF-8 -*-
from __future__ import absolute_import, division, print_function, unicode_literals
import os, logging
#logging.basicConfig(level=logging.DEBUG)
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# Initialize the process group.
dist.init_process_group('NCCL', rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
def demo_basic(rank, world_size):
setup(rank, world_size)
logger = logging.getLogger('train')
logger.setLevel(logging.DEBUG)
logger.info(f'Running DPP on rank={rank}.')
# Create model and move it to GPU.
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) # optimizer takes DDP model.
optimizer.zero_grad()
inputs = torch.randn(20, 10) # .to(rank)
outputs = ddp_model(inputs)
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_func, world_size):
mp.spawn(
demo_func,
args=(world_size,),
nprocs=world_size,
join=True
)
def main():
run_demo(demo_basic, 4)
if __name__ == "__main__":
main()
However, when we uncomment the 6th line, the logging works. May I know the reason and how to fix the bug please? |
st177638 | Solved by agolynski in post #2
Hi,
It doesn’t seem to be related to DDP or pytorch, but to how logging module is setup. If you remove all the torch code, you would still get the same result.
def main():
logger = logging.getLogger('train')
logger.setLevel(logging.DEBUG)
logger.info(f'in main.')
Does it block you in … |
st177639 | Hi,
It doesn’t seem to be related to DDP or pytorch, but to how logging module is setup. If you remove all the torch code, you would still get the same result.
def main():
logger = logging.getLogger('train')
logger.setLevel(logging.DEBUG)
logger.info(f'in main.')
Does it block you in any way? |
st177640 | Hi @agolynski, thank you so much for your kind reply. I have adjusted my code and found that the logger works very well if it is created inside the DDP process, but fails again if it is passed in as an argument. The following snippet demonstrates this:
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# Initialize the process group.
dist.init_process_group('NCCL', rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
def demo_basic(rank, world_size, logger=None):
setup(rank, world_size)
if rank == 0:
logger = get_logger() if logger is None else logger
logger.info(f'info in process')
logger.error(f'error in process.')
# Create model and move it to GPU.
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) # optimizer takes DDP model.
optimizer.zero_grad()
inputs = torch.randn(20, 10) # .to(rank)
outputs = ddp_model(inputs)
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_func, world_size):
logger = get_logger()
# logger = None # Created from inside.
mp.spawn(
demo_func,
args=(world_size, logger),
nprocs=world_size,
join=True
)
def get_logger():
logger = logging.getLogger('train')
# Handlers.
logger.addHandler(
logging.StreamHandler()
)
logger.setLevel(logging.DEBUG)
return logger
def example2():
run_demo(demo_basic, 4)
def main():
example2()
if __name__ == "__main__":
main()
If the code on line 54 is commented out (as above), there is no “info in process” output. However, if we comment out line 53 and uncomment line 54, we can see “info in process” in the output.
It does not block me, but I am quite curious why it happens; I thought DDP is essentially a wrapper around processes. |
st177641 | Recently I found that nn.Module cannot set some objects as its attributes. For example:
class Example(nn.Module):
def set(self):
# seq is instance of nn.Sequential
list = [seq1,seq2,seq3]
self.__setattr__('exp',list)
When using __getattr__, it will return:
AttributeError: ‘Example’ object has no attribute ‘exp’
I worked around it by replacing the list with an nn.ModuleList, but what is the purpose of this behavior? |
st177642 | Solved by ptrblck in post #2
Using an nn.ModuleList will make sure that all parameters are properly transferred to the device, if you are using model.to() or a data parallel approach.
The attribute should still be found using a plain list object, but should yield a device mismatch error.
Are you trying to create these modules… |
st177643 | Using an nn.ModuleList will make sure that all parameters are properly transferred to the device, if you are using model.to() or a data parallel approach.
The attribute should still be found using a plain list object, but should yield a device mismatch error.
Are you trying to create these modules during a forward pass and data parallel?
If so, note that these changes would be applied to each model copy on the device, not the main model. |
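A small sketch of the difference; the module and attribute names here are made up:
import torch.nn as nn

class WithList(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain Python list: the submodules are NOT registered, so their
        # parameters are invisible to .parameters(), .to(), state_dict(), etc.
        self.layers_plain = [nn.Linear(4, 4), nn.Linear(4, 4)]
        # nn.ModuleList: the submodules are registered properly.
        self.layers_registered = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 4)])

m = WithList()
print(len(list(m.parameters())))    # 4 -> only the ModuleList layers are counted
print(list(m.state_dict().keys()))  # only 'layers_registered.*' entries appear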
st177644 | Thanks for your answer and the hint. But what still confuses me is why I cannot assign a plain list to the model. Is there any reason for this? |
st177645 | I would assume the list should also be registered as an attribute, but should yield a device mismatch error. I still don’t know, why the attribute cannot be found at all.
Are you registering it after wrapping the model into a data parallel wrapper? |
st177646 | Sorry for misunderstanding your previous advice. But no, I register the attribute before wrapping the model into data parallel. Here’s my whole script:
import torch
import torch.nn as nn
class Example(nn.Module):
def set(self):
list = [1,2,3]
self.__setattr__('exp',list)
def set_torch(self):
conv1 = nn.Conv2d(128,256,3)
self.__setattr__('exp1',nn.ModuleList([conv1,conv1,conv1]))
example = Example()
example.set()
example.set_torch()
example.__getattr__('exp')
example.__getattr__('exp1')
and I get
>>example.__getattr__('exp')
Traceback (most recent call last):
File "E:\pycharm\PyCharm Community Edition 2020.1.3\plugins\python-ce\helpers\pydev\_pydevd_bundle\pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
File "C:\Users\acer\AppData\Roaming\Python\Python36\site-packages\torch\nn\modules\module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'Example' object has no attribute 'exp'
>>example.__getattr__('exp1')
ModuleList(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))
(1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))
(2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))
) |
st177647 | Thanks for the code. It seems you can directly access the attribute via example.exp, but example.__getattr__ calls into this derived method 18, which checks for parameters, buffers, and modules. |
st177648 | Hi,
I tried to run 2 training jobs on a GPU cluster with 8 GPUs; both jobs need to use NCCL AllReduce. I noticed the training speed is slower when running the 2 jobs at the same time than when running them separately. Is this because they are competing for the bandwidth of the GPU interconnect (the AllReduce call)? Thanks. |
st177649 | Hi @Yi_Zhang Yes, I would think that this is likely the case and running multiple training jobs will result in more competition for the GPU’s bandwidth and thus overall a slower performance compared to running either job individually.
Are you noticing any extreme slowness/hangs that may be more indicative of a bug? |
st177650 | Hi @rvarm1, thanks for the reply. I’m not sure if it is a bug, since the program doesn’t hang, but it affects the speed greatly. If I reduce the sync frequency, it helps to speed things up, so I feel the communication between GPUs may be a bottleneck. |
st177651 | I am trying to implement a very basic version of the “Asynchronous one-step Q-learning” 1 (page 3). I therefore need to train a neural network simultaneously on several processes (or threads, not sure yet).
The different processes need to use the same optimizer. There is a local network and a target network that gets updated every N steps (in my small code it gets updated but not used, for simplicity’s sake).
The overall system uses the Hogwild! method, so in theory there is no need to do much locking, from what I have understood.
This is my small snippet for trying to understand how I can implement these mechanics:
import torch
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
INPUT_DIMENSION = 10
OUTPUT_DIMENSION = 4
OPTIMIZER_STEP_FREQUENCY = 10
UPDATE_TARGET_NETWORK_FREQUENCY = 20
class Worker:
def __init__(self, online_network, target_network, optimizer):
self.optimizer = optimizer
self.online_network = online_network
self.target_network = target_network
def run(self, global_step, num_steps):
for i in range(num_steps):
with global_step.get_lock():
global_step.value += 1
data = torch.ones((1, INPUT_DIMENSION))
prediction = self.online_network(data)
target = -torch.ones((1, OUTPUT_DIMENSION))
loss = nn.MSELoss()(prediction, target)
loss.backward()
if i % OPTIMIZER_STEP_FREQUENCY == 0:
self.optimizer_step()
if i % UPDATE_TARGET_NETWORK_FREQUENCY == 0:
self.update_target_network()
def optimizer_step(self):
self.optimizer.step()
self.optimizer.zero_grad()
def update_target_network(self):
self.target_network.load_state_dict(self.online_network.state_dict())
if __name__ == '__main__':
online_network = nn.Linear(INPUT_DIMENSION, OUTPUT_DIMENSION)
online_network.share_memory()
target_network = nn.Linear(INPUT_DIMENSION, OUTPUT_DIMENSION)
target_network.load_state_dict(online_network.state_dict())
target_network.share_memory().eval()
global_step = mp.Value('i', 0)
optimizer = optim.SGD(online_network.parameters(), lr=0.005)
num_processes = 4
num_steps_per_worker = 30
processes = []
for rank in range(num_processes):
p = mp.Process(target=Worker(online_network, target_network, optimizer).run,
args=(global_step, num_steps_per_worker,))
p.start()
processes.append(p)
for p in processes:
p.join()
print(global_step.value)
print(online_network(torch.ones((1, INPUT_DIMENSION))).tolist())
I wanted to know if my way of handling the different variables and networks is okay. I am new to multiprocessing and I am not sure if what I am doing is “good practice”.
Also, I saw in repositories 3 that a custom class is used where the optimizer is “wrapped” to share it. Should I use such a class for my application? Is there a better way to do that (in newer versions of PyTorch)?
Thanks! |
st177652 | Hi,
In general the multiprocessing setup looks good to me.
It looks like on the link you mentioned, the authors have implemented SharedAdam: https://github.com/g6ling/Reinforcement-Learning-Pytorch-Cartpole/blob/master/parallel/1-Async-Q-Learning/shared_adam.py 14 to share optimizer state across processes. If your use case requires this too, this is probably a good approach, as PyTorch optimizers do not currently natively support sharing their state across processes. |
st177653 | Thanks a lot for the feedback!
I see that you are quite experienced with distributed systems. Do you know how much the optimizers cancel each other out? E.g. if one optimizer does a step and its gradients are reset, does that also cancel the accumulation of the other optimizers? I saw a thread about that and I am quite curious about it.
Also, what do you think is the best way to retrieve the global network parameters:
using .share_memory() and retrieving them from time to time, or using a multiprocessing.Manager() like in this implementation 2?
mp_manager = mp.Manager()
shared_state = mp_manager.dict()
shared_state["Q_network_state_dict"] = q_network.state_dict() |
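For what it’s worth, a minimal sketch of the .share_memory() variant (assuming q_network was shared once in the main process and local_network is a worker-local nn.Module; the names are placeholders):
# q_network.share_memory() was called once in the main process, so every
# worker sees the same underlying parameter storage and can simply copy it.
local_network.load_state_dict(q_network.state_dict())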
st177654 | Hi, I am trying to make my code work with torch.nn.parallel.DistributedDataParallel. However, in my code I need to load some files, which gives me the error: AttributeError: Can't pickle local object.
I guess the problem is the processes trying to access the data concurrently? Is there any way to make this work? |
st177655 | I’m not clear on how this is an issue with DistributedDataParallel which does not in itself do any pickling. Could you share a minimal script that reproduces the issue you’re seeing so we can debug further? Thanks! |
st177656 | Hi @rvarm1, thanks for your answer again. An example would be using TensorBoard, as I mentioned in another post: Using tensorboard with DistributedDataParallel 20
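One common way to avoid the pickling error in that situation is to create the SummaryWriter inside the spawned worker (on rank 0 only) rather than passing it through mp.spawn's args, since the writer itself is not picklable. A minimal sketch with placeholder names:
import torch.multiprocessing as mp
from torch.utils.tensorboard import SummaryWriter

def worker(rank, world_size):
    # dist.init_process_group(...) would go here
    writer = SummaryWriter() if rank == 0 else None  # created in the child, never pickled
    # inside the training loop:
    # if writer is not None:
    #     writer.add_scalar("loss", loss_value, step)

if __name__ == '__main__':
    mp.spawn(worker, args=(2,), nprocs=2, join=True)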
st177657 | Hello there, I am a student and, you could say, a beginner in machine learning.
I am currently looking into the problem of parallel training on multiple GPUs. I understand DataParallel, but I can't make DistributedDataParallel work.
The part I don't understand is the communication through the backend and connecting two nodes. For example, do they need to be on the same cluster? Or is a static IP for the master node enough? Can I somehow use a public IP, and how?
My question really is whether you could provide me with some good sources of knowledge, tutorials, etc.
I would be really grateful.
Lukas |
st177658 | Solved by rvarm1 in post #2
st177659 | Hi,
To concretely answer your question around communication, basically nodes need some way to discover each other, whether that is through a shared-filesystem approach, or through a main IP address that every node can talk to.
Here are some helpful tutorials around PyTorch distributed and DDP:
PyTorch DDP tutorial: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html 10
PyTorch distributed overview: https://pytorch.org/tutorials/beginner/dist_overview.html 3 |
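As a concrete illustration of the main-IP approach, every process calls init_process_group pointing at the same reachable address of the node acting as rank 0; the IP, port, world size, and rank below are placeholder values:
import torch.distributed as dist

dist.init_process_group(
    backend="nccl",                          # "gloo" also works, e.g. for CPU-only tests
    init_method="tcp://192.168.1.10:23456",  # address:port of the node acting as rank 0
    world_size=2,                            # total number of processes across all nodes
    rank=0,                                  # unique per process: 0 here, 1 on the other node
)
The nodes do not have to be in the same cluster as long as each of them can reach that address and port (a public IP can work too, subject to firewall rules), though training over a slow link is usually not practical.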
st177660 | How can I inference model under distributed data parallel?
I want to gather all predictions to calculate metrics and write result in one file. |
st177661 | Hi, At a high level, after training your model with DDP, you can save its state_dict to a path and load a local model from that state_dict using load_state_dict . You can find full documentation here: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#save-and-load-checkpoints 88. |
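If the evaluation itself also runs under DDP, one possible way (among others) to collect every rank's predictions in one place is torch.distributed.all_gather_object, available from PyTorch 1.7. In this sketch, local_preds is assumed to be the list of predictions this rank produced for its shard of the data:
import torch.distributed as dist

gathered = [None for _ in range(dist.get_world_size())]
dist.all_gather_object(gathered, local_preds)   # every rank now holds all ranks' lists

if dist.get_rank() == 0:
    all_preds = [p for rank_preds in gathered for p in rank_preds]
    # compute metrics and write the single result file here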
st177662 | When I use DataParallel, I find that the first dim of the outputs is batch_size * gpu_nums, and when I calculate the loss I get this error:
ValueError: Expected input batch_size (32) to match target batch_size (16).
my code is:
model = DataParallel(model, device_ids=gpus, output_device=gpus[0])
model.to(config.cuda_id)
outputs = model(input_ids, token_type_ids, attention_mask)
loss = loss_fct(outputs, labels.cuda(config.cuda_id))
I think the first dim of the outputs should be batch_size.
How can I fix this? Can anybody help me? Thanks.
st177663 | Thanks! Checking some other sources: ValueError: Expected input batch_size (1) to match target batch_size (64) 8 and https://stackoverflow.com/questions/56719867/pytorch-expected-input-batch-size-12-to-match-target-batch-size-64 5, and ValueError: Expected input batch_size (324) to match target batch_size (4) 5, there is likely a bug in how you’ve defined the shapes in the implementation of your forward pass.
If your model works without DataParallel but breaks with it, it's likely due to your model implicitly hardcoding a specific batch size it expects, likely at the beginning of the forward pass (maybe somewhere in self.bert()).
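A typical (hypothetical) example of such a hard-coded shape inside forward, and the shape-agnostic fix:
# breaks under DataParallel: each replica only receives batch_size / n_gpus samples
x = x.view(batch_size, -1)

# works everywhere: derive the batch dimension from the tensor itself
x = x.view(x.size(0), -1)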
st177664 | I've been reading the official documents on distributed training these days. I tried to start training with both mp.spawn and torch.distributed.launch. I found that using mp.spawn is slower than torch.distributed.launch, mainly in the data-loading stage at the beginning of each epoch.
For example, when using torch.distributed.launch, it only takes 8 seconds to train every epoch. When using mp.spawn, it takes 17 seconds to train every epoch, of which the first 9 seconds are spent waiting (GPU util is 0%).
I also found that when using torch.distributed.launch, I can see multiple processes with the ps -ef | grep train_multi command, but when I use mp.spawn, I can only see one process.
I don't know if I am using it incorrectly. I hope I can get your advice. Looking forward to your reply.
environments:
OS: Centos7
Python: 3.6
Pytorch: 1.7 GPU
CUDA: 10.1
GPU: Tesla V100
The code is from https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/blob/master/pytorch_classification/train_multi_GPU/train_multi_gpu_using_spawn.py 2
import os
import math
import tempfile
import argparse

import torch
import torch.multiprocessing as mp
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

from model import resnet34
from my_dataset import MyDataSet
from utils import read_split_data, plot_data_loader_image
from multi_train_utils.distributed_utils import dist, cleanup
from multi_train_utils.train_eval_utils import train_one_epoch, evaluate


def main_fun(rank, world_size, args):
    if torch.cuda.is_available() is False:
        raise EnvironmentError("not find GPU device for training.")

    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    args.rank = rank
    args.world_size = world_size
    args.gpu = rank
    args.distributed = True

    torch.cuda.set_device(args.gpu)
    args.dist_backend = 'nccl'
    print('| distributed init (rank {}): {}'.format(
        args.rank, args.dist_url), flush=True)
    dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
                            world_size=args.world_size, rank=args.rank)
    dist.barrier()

    rank = args.rank
    device = torch.device(args.device)
    batch_size = args.batch_size
    num_classes = args.num_classes
    weights_path = args.weights
    args.lr *= args.world_size

    if rank == 0:
        print(args)
        print('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/')
        tb_writer = SummaryWriter()
        if os.path.exists("./weights") is False:
            os.makedirs("./weights")

    train_images_path, train_images_label, val_images_path, val_images_label = read_split_data(args.data_path)

    data_transform = {
        "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                     transforms.RandomHorizontalFlip(),
                                     transforms.ToTensor(),
                                     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
        "val": transforms.Compose([transforms.Resize(256),
                                   transforms.CenterCrop(224),
                                   transforms.ToTensor(),
                                   transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])}

    train_data_set = MyDataSet(images_path=train_images_path,
                               images_class=train_images_label,
                               transform=data_transform["train"])
    val_data_set = MyDataSet(images_path=val_images_path,
                             images_class=val_images_label,
                             transform=data_transform["val"])

    train_sampler = torch.utils.data.distributed.DistributedSampler(train_data_set)
    val_sampler = torch.utils.data.distributed.DistributedSampler(val_data_set)
    train_batch_sampler = torch.utils.data.BatchSampler(
        train_sampler, batch_size, drop_last=True)

    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    if rank == 0:
        print('Using {} dataloader workers every process'.format(nw))

    train_loader = torch.utils.data.DataLoader(train_data_set,
                                               batch_sampler=train_batch_sampler,
                                               pin_memory=True,
                                               num_workers=nw,
                                               collate_fn=train_data_set.collate_fn)
    val_loader = torch.utils.data.DataLoader(val_data_set,
                                             batch_size=batch_size,
                                             sampler=val_sampler,
                                             pin_memory=True,
                                             num_workers=nw,
                                             collate_fn=val_data_set.collate_fn)

    model = resnet34(num_classes=num_classes).to(device)

    if os.path.exists(weights_path):
        weights_dict = torch.load(weights_path, map_location=device)
        load_weights_dict = {k: v for k, v in weights_dict.items()
                             if model.state_dict()[k].numel() == v.numel()}
        model.load_state_dict(load_weights_dict, strict=False)
    else:
        checkpoint_path = os.path.join(tempfile.gettempdir(), "initial_weights.pt")
        if rank == 0:
            torch.save(model.state_dict(), checkpoint_path)
        dist.barrier()
        model.load_state_dict(torch.load(checkpoint_path, map_location=device))

    if args.freeze_layers:
        for name, para in model.named_parameters():
            if "fc" not in name:
                para.requires_grad_(False)
    else:
        if args.syncBN:
            model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)

    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])

    # optimizer
    pg = [p for p in model.parameters() if p.requires_grad]
    optimizer = optim.SGD(pg, lr=args.lr, momentum=0.9, weight_decay=0.005)
    # Scheduler https://arxiv.org/pdf/1812.01187.pdf
    lf = lambda x: ((1 + math.cos(x * math.pi / args.epochs)) / 2) * (1 - args.lrf) + args.lrf  # cosine
    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)

    for epoch in range(args.epochs):
        train_sampler.set_epoch(epoch)

        mean_loss = train_one_epoch(model=model,
                                    optimizer=optimizer,
                                    data_loader=train_loader,
                                    device=device,
                                    epoch=epoch)
        scheduler.step()

        sum_num = evaluate(model=model,
                           data_loader=val_loader,
                           device=device)
        acc = sum_num / val_sampler.total_size

        if rank == 0:
            print("[epoch {}] accuracy: {}".format(epoch, round(acc, 3)))

            tags = ["loss", "accuracy", "learning_rate"]
            tb_writer.add_scalar(tags[0], mean_loss, epoch)
            tb_writer.add_scalar(tags[1], acc, epoch)
            tb_writer.add_scalar(tags[2], optimizer.param_groups[0]["lr"], epoch)

            torch.save(model.state_dict(), "./weights/model-{}.pth".format(epoch))

    if rank == 0:
        if os.path.exists(checkpoint_path) is True:
            os.remove(checkpoint_path)

    cleanup()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--num_classes', type=int, default=5)
    parser.add_argument('--epochs', type=int, default=30)
    parser.add_argument('--batch-size', type=int, default=16)
    parser.add_argument('--lr', type=float, default=0.001)
    parser.add_argument('--lrf', type=float, default=0.1)
    parser.add_argument('--syncBN', type=bool, default=True)
    # http://download.tensorflow.org/example_images/flower_photos.tgz
    parser.add_argument('--data-path', type=str, default="/home/wz/data_set/flower_data/flower_photos")
    # https://download.pytorch.org/models/resnet34-333f7ec4.pth
    parser.add_argument('--weights', type=str, default='resNet34.pth',
                        help='initial weights path')
    parser.add_argument('--freeze-layers', type=bool, default=False)
    parser.add_argument('--device', default='cuda', help='device id (i.e. 0 or 0,1 or cpu)')
    parser.add_argument('--world-size', default=4, type=int,
                        help='number of distributed processes')
    parser.add_argument('--dist-url', default='env://', help='url used to set up distributed training')
    opt = parser.parse_args()

    mp.spawn(main_fun,
             args=(opt.world_size, opt),
             nprocs=opt.world_size,
             join=True)
st177665 | Addressing this as a GH issue: https://github.com/pytorch/pytorch/issues/47587 111 |
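One thing that is often worth ruling out when every epoch starts with an idle wait is DataLoader worker re-creation: the worker processes are torn down and restarted each time a new epoch's iterator is built. PyTorch 1.7 added persistent_workers to keep them alive between epochs; whether it explains the mp.spawn vs. torch.distributed.launch gap here is not verified, but it is cheap to try:
train_loader = torch.utils.data.DataLoader(train_data_set,
                                           batch_sampler=train_batch_sampler,
                                           pin_memory=True,
                                           num_workers=nw,
                                           persistent_workers=True,  # requires PyTorch >= 1.7 and num_workers > 0
                                           collate_fn=train_data_set.collate_fn)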
st177666 | image1413×215 36.5 KB
here is my mode, my problem is when i only load the pretrained params in rank=0 process, the other params such as ‘acc_max’ in checkpoint can’t be sysc cross the process,it seems only sysc the model weights cross ranks, so what should i do to make sure the all kinds of params saved in checkpoint will be loaded on all process.
(ps: if i remove the ‘args.local_rank==0’ judge, i watch the gpustats and see two process running on the gpu0. That case is not my desired. )
thanks bro! |
st177667 | Hi,
As I understand it you’d like to broadcast the value of acc_max to all ranks from rank 0?
In that case, you can simply convert it to a tensor and call dist.broadcast:
acc_max = checkpoint['acc_max']
acc_tensor = torch.zeros(1) if args.local_rank != 0 else torch.tensor([acc_max])
# src=0 is required so every rank receives rank 0's value; with the NCCL
# backend the tensor would additionally need to live on this rank's GPU.
torch.distributed.broadcast(acc_tensor, src=0)
acc_max_from_rank_0 = acc_tensor.item()
pytorch.org
Distributed communication package - torch.distributed — PyTorch 1.7.0... |