st177768 | Solved by mrshenli in post #2
Hey @sakh251
1- The data should be sent to the parameter server (is this feasible for a large dataset?)
2- It is async.
3- Distributed autograd is used.
Am I correct?
Yep, you are correct.
I am wondering whether there is any way to implement a sync parameter server with the RPC framework without … |
st177769 | Hey @sakh251
1- The data should be sent to the parameter server (is this feasible for a large dataset?)
2- It is async.
3- Distributed autograd is used.
Am I correct?
Yep, you are correct.
I am wondering whether there is any way to implement a sync parameter server with the RPC framework without moving data to the parameter server? I mean, just use the local gradients and, for example, do model averaging on the parameter server. Does anything like that exist?
Yep, this is possible. Check out this tutorial. It is not exactly what you are looking for, as the trainers send gradients to the parameter server and the parameter server batch-updates the model. But it shouldn’t be too far from your requirements. Adjusting batch_update_size and sending model params instead of gradients might do the job. |
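For reference, here is a minimal sketch (not the tutorial code) of synchronous model averaging on an RPC parameter server, using @rpc.functions.async_execution so that every trainer blocks until all local models have been pushed. The class name, the worker usage in the comments, and the assumption that all state_dict entries are floating point are illustrative only:

import threading
import torch
import torch.distributed.rpc as rpc
from torch.futures import Future

class AveragingServer:
    def __init__(self, model, num_trainers):
        self.model = model
        self.num_trainers = num_trainers
        self.lock = threading.Lock()
        self.received = []
        self.future = Future()

    @staticmethod
    @rpc.functions.async_execution
    def push_and_average(ps_rref, state_dict):
        # Each trainer pushes its locally trained weights; once all trainers
        # have reported, the server averages them and unblocks everyone with
        # the new global state_dict.
        self = ps_rref.local_value()
        with self.lock:
            fut = self.future
            self.received.append(state_dict)
            if len(self.received) == self.num_trainers:
                avg = {k: torch.stack([sd[k] for sd in self.received]).mean(dim=0)
                       for k in self.received[0]}
                self.model.load_state_dict(avg)
                self.received = []
                self.future = Future()
                fut.set_result(avg)
        return fut

# On each trainer, after a round of local training:
# new_state = rpc.rpc_sync(ps_rref.owner(), AveragingServer.push_and_average,
#                          args=(ps_rref, local_model.state_dict()))
# local_model.load_state_dict(new_state)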
st177770 | a simple MLP model was considered and the initial model() shared between workers.
Conventionally, each worker train model based on its own dataset and shared the resulted model. By averaging models from different workers, the new model achieved for the next round of algorithms.
How efficiently share gradients of different workers and averaged them and use the “optimizer .step()” to update the initial model(initial model shared among workers)? |
st177771 | hey @Ohm, if you are using DistributedDataParallel, you can try the no_sync 3 context manager. For example, you can wrap local training iterations with no_sync. When you need to do gradient averaging, just run one fw-bw out of the no_sync context, and DDP should be able to take care of the gradient synchronization.
Another option would be building your application using torch.distributed.rpc 1 and then use a parameter server to sync models. See this tutorial 3.
If all parameters are dense, the DDP solution should be more efficient. If there are sparse parameters, the parameter server solution might be faster. |
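As an illustration, a minimal sketch of the no_sync pattern described above, assuming ddp_model, optimizer, loss_fn, data_iter (yielding (inputs, targets) pairs), and sync_every are defined elsewhere:

for step, (inputs, targets) in enumerate(data_iter):
    if (step + 1) % sync_every == 0:
        # This iteration runs outside no_sync, so DDP all-reduces the
        # gradients accumulated so far across all workers.
        loss_fn(ddp_model(inputs), targets).backward()
        optimizer.step()
        optimizer.zero_grad()
    else:
        # Gradients accumulate locally; no communication happens here.
        with ddp_model.no_sync():
            loss_fn(ddp_model(inputs), targets).backward()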
st177772 | Hello,
I am reviewing the PyTorch ImageNet example in the repos and I have trouble understanding the loss value returned by the criterion module. In Line 291, is the loss that is recorded later only for one process? Is summing and averaging all losses across all processes using ReduceOp.SUM a better alternative? For example, when I want to save my model or simply log the metric, I would like to do it based on the average loss value across all processes.
In other words, is losses.update(loss.item(), images.size(0)) only saving the loss value for one process? |
st177773 | amirhf:
is the loss that is recorded later for only one process?
Hey @amirhf, if you are using DistributedDataParallel, yep, this is the local loss within one process. And different processes can have different loss values.
Is summing and averaging all losses across all processes using ReduceOp.SUM a better alternative?
This will give you the global loss, but it will also introduce one more communication per iteration. If this is just for logging purposes, would it be sufficient to do the logging every n iterations, so that the amortized communication overhead is smaller? |
st177774 | Hi @mrshenli,
Thank you for your response! So the concern is that the reduce operation is an overhead. Yes, so I would just log the global loss every few iterations. Something along the lines of if iteration % 200 == 0: reduce and log. Is that going to be okay? |
st177775 | Hi @amirhf,
I synchronize the loss record at every epoch, which is about 100 ~ 1500 iterations depending on the dataset I use. I didn’t see much performance degradation with that. In my experience, the gains from DistributedDataParallel (compared to DataParallel) were bigger than the loss from the GPU communication time.
You can refer to my code here. |
st177776 | Yep, that should be infrequent enough.
There are also ways to hide this communication delay by setting async_op=True when launching the dist.reduce, and only wait on the returned handle after the next forward pass. This would allow the communication of dist.reduce to overlap with the next fw pass. |
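A schematic sketch of that overlap, assuming it sits inside the training loop discussed above (the helper name and the 200-iteration interval are just for illustration):

import torch
import torch.distributed as dist

def log_global_loss(loss: torch.Tensor, iteration: int, log_every: int = 200):
    # Kick off an async reduce of the local loss onto rank 0 every
    # `log_every` iterations; the caller waits on the handle later so the
    # communication overlaps with the next forward pass.
    if iteration % log_every != 0:
        return None, None
    buf = loss.detach().clone()
    handle = dist.reduce(buf, dst=0, op=dist.ReduceOp.SUM, async_op=True)
    return handle, buf

# Inside the training loop:
#   handle, buf = log_global_loss(loss, iteration)
#   output = net(next_inputs)          # overlaps with the in-flight reduce
#   if handle is not None:
#       handle.wait()
#       if dist.get_rank() == 0:
#           print(iteration, (buf / dist.get_world_size()).item())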
st177777 | Suppose I have a machine with 8 GPUs and 64 CPUs.
Using DistributedDataParallel, it will run 8 processes and each process uses a single GPU. I’m wondering about the CPUs: are they evenly distributed across the 8 processes? Can we specify how many CPUs each process uses? |
st177778 | Solved by ptrblck in post #3
I’ve seen approaches to set the CPU affinity for a GPU device using nvml as described here.
However, I don’t know if and how this approach would work for a general PyTorch process and if you would benefit from it. |
st177779 | Hey @zzzf, DDP does not do any specific thing to allocate CPUs.
cc @ptrblck is it possible to pin CPU affinity for a PyTorch process? |
st177780 | I’ve seen approaches to set the CPU affinity for a GPU device using nvml as described here 16.
However, I don’t know if and how this approach would work for a general PyTorch process and if you would benefit from it. |
st177781 | @pritamdamania87 also suggested os.sched_setaffinity .
@zzzf please let us know if these solutions would help. Thx! |
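For completeness, a small sketch of the os.sched_setaffinity suggestion, assuming 8 DDP processes on a 64-core machine so that each rank gets 8 dedicated cores (Linux only):

import os

def pin_to_cpus(rank: int, cores_per_proc: int = 8):
    first = rank * cores_per_proc
    # 0 means "the current process"; restrict it to its own slice of cores.
    os.sched_setaffinity(0, range(first, first + cores_per_proc))

# e.g. at the top of each per-rank entry point:
# pin_to_cpus(rank)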
st177782 | Hi Shen, could you please also take a look at my latest post which is relevant to your tutorial: DistributedDataParallel: model weights and grads not synchronized with multiple forward backward pass 8? Thanks! |
st177783 | yep, commented there.
DDP is supposed to be used with alternating forward and backward passes. I am a little surprised that it didn’t throw any error. Please let us know the version of PyTorch you are using; we might have recently accidentally disabled the check for some code paths. |
st177784 | NCCL error happens when I try to run a job on 3 nodes. Everything works fine when running on a single node.
My launching command is:
/usr/local/bin/mpirun --hostfile /var/storage/shared/resrchvc/sys/jobs/application_1602032654055_58426/scratch/1/mpi-hosts --tag-output -x NCCL_IB_DISABLE=1 -np 4 -map-by node -bind-to none -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -x PT_OUTPUT_DIR -x PT_DATA_DIR -x PT_LOGS_DIR -x PT_CODE_DIR -x PYTHONBREAKPOINT -x NCCL_IB_DISABLE=0 -x NCCL_IB_HCA=mlx5_0,mlx5_2 -x NCCL_SOCKET_IFNAME=ib0 mmf_run config=projects/visual_bert/configs/localized_narratives/pretrain.yaml model=visual_bert dataset=masked_localized_narratives run_type=train env.cache_dir=/mnt/default/mmf_cache env.save_dir=/mnt/output/projects/mmf/ln_mask_pretrain_experiment/pt-results/application_1602032654055_58426 env.log_dir=/mnt/output/projects/mmf/ln_mask_pretrain_experiment/logs/application_1602032654055_58426 env.data_dir=/mnt/default/mmf_cache/data
The detailed debug info is shown below. Any idea on how to tackle this problem?
........
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_2
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_2
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_0
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_0
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_3
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_3
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_1
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_1
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] NCCL INFO NET/IB : No device found.
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] NCCL INFO NET/IB : No device found.
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:265:265 [4] NCCL INFO NET/Socket : Using [0]ib0:192.168.33.19<0>
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:264 [3] NCCL INFO NET/Socket : Using [0]ib0:192.168.33.19<0>
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:266 [5] NCCL INFO NET/Socket : Using [0]ib0:192.168.33.19<0>
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] NCCL INFO Bootstrap : Using [0]ib0:192.168.33.19<0>
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] NCCL INFO NCCL_IB_DISABLE set by environment to 0.
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_2
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_0
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_3
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed
[1,3]<stdout>:
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] transport/net_ib.cc:117 NCCL WARN NET/IB : Unable to open device mlx5_1
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] NCCL INFO NET/IB : No device found.
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:263 [2] NCCL INFO NET/Socket : Using [0]ib0:192.168.33.19<0>
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:261:551 [0] NCCL INFO Setting affinity for GPU 0 to 1fc07f
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:558 [2] NCCL INFO Setting affinity for GPU 2 to 1fc07f
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:262:553 [1] NCCL INFO Setting affinity for GPU 1 to 1fc07f
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:557 [5] NCCL INFO Setting affinity for GPU 5 to 0fe03f80
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:267:552 [6] NCCL INFO Setting affinity for GPU 6 to 0fe03f80
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:265:555 [4] NCCL INFO Setting affinity for GPU 4 to 0fe03f80
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:556 [3] NCCL INFO Setting affinity for GPU 3 to 1fc07f
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:269:554 [7] NCCL INFO Setting affinity for GPU 7 to 0fe03f80
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:261:551 [0] NCCL INFO Channel 00 : 0 1 2 3 4 5 6 7
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:265:555 [4] NCCL INFO Ring 00 : 4[4] -> 5[5] via P2P/IPC
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:261:551 [0] NCCL INFO Ring 00 : 0[0] -> 1[1] via P2P/IPC
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:269:554 [7] NCCL INFO Ring 00 : 7[7] -> 0[0] via direct shared memory
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:556 [3] NCCL INFO Ring 00 : 3[3] -> 4[4] via direct shared memory
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:557 [5] NCCL INFO Ring 00 : 5[5] -> 6[6] via P2P/IPC
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:267:552 [6] NCCL INFO Ring 00 : 6[6] -> 7[7] via P2P/IPC
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:262:553 [1] NCCL INFO Ring 00 : 1[1] -> 2[2] via P2P/IPC
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:558 [2] NCCL INFO Ring 00 : 2[2] -> 3[3] via P2P/IPC
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:265:555 [4] NCCL INFO comm 0x7f5da4002600 rank 4 nranks 8 cudaDev 4 nvmlDev 4 - Init COMPLETE
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:261:551 [0] NCCL INFO Using 256 threads, Min Comp Cap 7, Trees disabled
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:266:557 [5] NCCL INFO comm 0x7fdb60002600 rank 5 nranks 8 cudaDev 5 nvmlDev 5 - Init COMPLETE
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:261:551 [0] NCCL INFO comm 0x7f1f34002600 rank 0 nranks 8 cudaDev 0 nvmlDev 0 - Init COMPLETE
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:261:261 [0] NCCL INFO Launch mode Parallel
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:269:554 [7] NCCL INFO comm 0x7f656c002600 rank 7 nranks 8 cudaDev 7 nvmlDev 7 - Init COMPLETE
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:264:556 [3] NCCL INFO comm 0x7f7dcc002600 rank 3 nranks 8 cudaDev 3 nvmlDev 3 - Init COMPLETE
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:267:552 [6] NCCL INFO comm 0x7f2b18002600 rank 6 nranks 8 cudaDev 6 nvmlDev 6 - Init COMPLETE
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:262:553 [1] NCCL INFO comm 0x7fa314002600 rank 1 nranks 8 cudaDev 1 nvmlDev 1 - Init COMPLETE
[1,3]<stdout>:container-e2250-1602032654055-58426-01-000008:263:558 [2] NCCL INFO comm 0x7fe718002600 rank 2 nranks 8 cudaDev 2 nvmlDev 2 - Init COMPLETE
[1,3]<stdout>:2020-10-22T19:32:39 | mmf: Logging to: /mnt/output/projects/mmf/ln_mask_pretrain_experiment/pt-results/application_1602032654055_58426/train.log
[1,3]<stdout>:2020-10-22T19:32:39 | mmf_cli.run: Namespace(config_override=None, local_rank=None, opts=['config=projects/visual_bert/configs/localized_narratives/pretrain.yaml', 'model=visual_bert', 'dataset=masked_localized_narratives', 'run_type=train', 'env.cache_dir=/mnt/default/mmf_cache', 'env.save_dir=/mnt/output/projects/mmf/ln_mask_pretrain_experiment/pt-results/application_1602032654055_58426', 'env.log_dir=/mnt/output/projects/mmf/ln_mask_pretrain_experiment/logs/application_1602032654055_58426', 'env.data_dir=/mnt/default/mmf_cache/data'])
[1,3]<stdout>:2020-10-22T19:32:39 | mmf_cli.run: Torch version: 1.6.0+cu101
[1,3]<stdout>:2020-10-22T19:32:39 | mmf.utils.general: CUDA Device 0 is: Tesla V100-PCIE-32GB
[1,3]<stdout>:2020-10-22T19:32:39 | mmf_cli.run: Using seed 39811694
[1,3]<stdout>:2020-10-22T19:32:39 | mmf.trainers.mmf_trainer: Loading datasets
[1,1]<stderr>:Traceback (most recent call last):
[1,1]<stderr>: File "/home/v-kunyan/.local/bin/mmf_run", line 8, in <module>
[1,1]<stderr>: sys.exit(run())
[1,1]<stderr>: File "/home/v-kunyan/.local/lib/python3.7/site-packages/mmf_cli/run.py", line 118, in run
[1,1]<stderr>: nprocs=config.distributed.world_size,
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 200, in spawn
[1,1]<stderr>: return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
[1,1]<stderr>: while not context.join():
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 119, in join
[1,1]<stderr>: raise Exception(msg)
[1,1]<stderr>:Exception:
[1,1]<stderr>:
[1,1]<stderr>:-- Process 0 terminated with the following error:
[1,1]<stderr>:Traceback (most recent call last):
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
[1,1]<stderr>: fn(i, *args)
[1,1]<stderr>: File "/home/v-kunyan/.local/lib/python3.7/site-packages/mmf_cli/run.py", line 66, in distributed_main
[1,1]<stderr>: main(configuration, init_distributed=True, predict=predict)
[1,1]<stderr>: File "/home/v-kunyan/.local/lib/python3.7/site-packages/mmf_cli/run.py", line 33, in main
[1,1]<stderr>: distributed_init(config)
[1,1]<stderr>: File "/home/v-kunyan/.local/lib/python3.7/site-packages/mmf/utils/distributed.py", line 244, in distributed_init
[1,1]<stderr>: dist.all_reduce(torch.zeros(1).cuda())
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 936, in all_reduce
[1,1]<stderr>: work = _default_pg.allreduce([tensor], opts)
[1,1]<stderr>:RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:492, internal error, NCCL version 2.4.8
[1,1]<stderr>:
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[39364,1],1]
Exit code: 1 |
st177785 | Typically this indicates an error in the NCCL library itself (not at the PyTorch layer), and as a result we don’t have much visibility into the cause of this error, unfortunately. Is this error consistent, or does the training work if you re-run? Are you using 3 nodes with 8 gpus each? |
st177786 | Yes, I’m using 3 nodes with 8 GPUs each and this error can be reproduced every time. |
st177787 | This sounds like a setup or NCCL issues, so you could install the NCCL test 129 and check, if the mpi workload is working properly in your setup. |
st177788 | The nccl test output is as follows:
[screenshot of the nccl-tests output]
Does it mean that the NCCL setup is working correctly?
By the way, I’ve noticed the NCCL version in my docker image is 2.7.8, but the runtime error says the NCCL version is 2.4.8. It seems that PyTorch has another version installed internally; will the version mismatch lead to an error?
Thank you all for your time! |
st177789 | The NCCL submodule was updated to 2.7.8 approx. a month ago, so you could use the nightly binary to use the same version (which seems to work in your setup) or test 2.4.8 in the container. |
st177790 | Hello guys,
I would like to do parallel evaluation of my models on multiple GPUs. I don’t have much experience using python and pytorch this way. Here is a pseudocode of what I’m trying to do:
import torch
import torch.multiprocessing as mp
from mycnn import CNN
from data_parser import parser
from fitness import get_fitness # this also runs on GPU

def run_model(outputs, model, device_id, input):
    out = model(input)
    f = get_fitness(out) # due to this I cannot just run: model(input, non_blocking=True)
    outputs[device_id] = f.cpu()

if __name__ == '__main__':
    batch = parser.get_batch()
    model = CNN()
    GPU_NUM = 2
    outputs = torch.zeros(GPU_NUM, dtype=torch.double) # collect outputs here
    outputs.share_memory_() # I guess this is not enough to make it work
    mp.set_start_method('spawn')
    processes = []
    for dev_id in (range(GPU_NUM)):
        device = torch.device("cuda:" + str(dev_id))
        dev_model = model.to(device)
        dev_batch = batch.to(device)
        p = mp.Process(target=run_model, args=(outputs, dev_model, dev_id, dev_batch))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
    print(outputs)
Sadly this doesn’t work at all and I’m probably doing it completely wrong. I’m getting this error:
OSError: [Errno 12] Cannot allocate memory
For some reason it drains all the memory on my server, even when GPU_NUM = 1. When I run the code synchronously I get no errors. Could you please tell me the right way to do something like this, or point me to some examples that would help? |
st177791 | If you have multiple GPUs and want to evaluate each model on a single dedicated GPU independently, you could just push each model to a GPU via:
modelA = modelA.to('cuda:0')
modelB = modelB.to('cuda:1')
...
and evaluate each model separately.
Since CUDA calls are asynchronous, the GPUs shouldn’t block each other if you don’t synchronize them manually. |
st177792 | Zdeeno:
OSError: [Errno 12] Cannot allocate memory
If the cannot allocate memory issues still persist, you might try allocating additional swap space on your machine (see https://github.com/pytorch/pytorch/issues/4387 8 for more details). |
st177793 | what if you have a large amount of data to process (evaluate, not train) and you want to take advantage of mulitple gpus? in this case would it be best to use the DataParallel class and just use a larger batch size? or would replicating the model using something similar to the example:
modelA = modelA.to(‘cuda:0’)
modelB = modelB.to(‘cuda:1’)
be the way to go? |
st177794 | If the model fits on a single GPU, you could use a data parallel approach such as nn.DataParallel.
Model sharding could also be used to increase the batch size, but I think a data parallel approach might be easier to apply, since splitting the model into two parts with approximately equal memory usage might be more tricky.
Note that if you are using DistributedDataParallel during evaluation, you would have to make sure the complete validation set is used only once. By default the DistributedSampler will add extra samples to make the dataset evenly divisible, as seen in this line of code. |
st177795 | I have four gpus, I make one program to load four models and inference each model with shared data in each single gpus, then merge each model’s results. For example:
models = [model1.to('cuda:0'), model2.to('cuda:1'), model3.to('cuda:2')]
data = get_data()
results = []
for index, model in enumerate(models):
data = data.to('cuda:%d'%index)
r = model(data)
result.append(r)
final_result = fusion_result(result)
But right now, each model can only get data from a single process, can I get data and share to multi-model and do asynchronous inference by multiprocessing? Many thanks. |
st177796 | Since the models and data tensors are already pushed to different devices, each execution is already performed asynchronously, which should also be visible in e.g. nvprof or NSIGHT. |
st177797 | According to my code, the GPU will do inference only call model(data) , which means if my model is so complex, another three gpu will stop running for a while until one gpu finish inference. I am wonder that can those inferences doing at the same time with shared data? Thank you so much. |
st177798 | model(data) will execute the forward pass on the used device inside the model.
If your model implementation is only using a single device (i.e. no model sharding is used), the execution will begin asynchronously.
The next model(data) call will launch the execution on the next specified device (assuming you’ve created copies of the model and data on the other devices). |
st177799 | Yes, but the remaining device will suspend until the next model(data) call, which spends a lot of time. What I want to do right now is send the shared data to different devices at the same time, then each device will execute the model(data) forward asynchronously and merge the result in the host process. Any idea to do that? Thank you so much for your reply. |
st177800 | You can use SimpleQueue in torch.multiprocessing to do that. E.g., you can create a queue between the host process and each subprocess, and use the queue to pass input data and collect output. The test below can serve as an example:
github.com
pytorch/pytorch/blob/0c5cd8c2b9cdf473e30bbb1b49ca80ed442813df/test/test_multiprocessing.py#L577-L600 1
@unittest.skipIf(NO_MULTIPROCESSING_SPAWN, "Disabled for environments that \
don't support multiprocessing with spawn start method")
@unittest.skipIf(not TEST_CUDA_IPC, 'CUDA IPC not available')
def test_event_multiprocess(self):
event = torch.cuda.Event(enable_timing=False, interprocess=True)
self.assertTrue(event.query())
ctx = mp.get_context('spawn')
p2c = ctx.SimpleQueue()
c2p = ctx.SimpleQueue()
p = ctx.Process(
target=TestMultiprocessing._test_event_multiprocess_child,
args=(event, p2c, c2p))
p.start()
c2p.get() # wait for until child process is ready
torch.cuda._sleep(50000000) # spin for about 50 ms
event.record()
p2c.put(0) # notify child event is recorded
This file has been truncated. show original |
st177801 | Thank you so much, I find that the pickle and unpickle spend a lot of time when transfer the numpy image to subprocessing queue.put(img) and get result queue.get() . It is not efficient than loop (in single process). Do you have some idea to save this problem ? Many thanks. |
st177802 | Does using a shared_memory tensor help in this case? See the doc below:
pytorch.org
Multiprocessing best practices — PyTorch 1.6.0 documentation 5 |
st177803 | @kehuantiantang can you please clarify the nature of the data, is it pinned memory on CPU or it is on GPU? It might be more efficient to put it on one of the GPU devices and share it with others via IPC instead of transferring it from device to device using .to() call.
Also it seems that you are looking for solution to evenly distribute load between GPUs as one of the models are much faster than others. In this case I recommend to try outer loop which will rotate models between GPUs (if memory allows) and avoid fusion_result calls as long as possible as it is looks like your synchronization point. |
st177804 | Sorry. I am not clarify so clearly, my data is a numpy image which shape is 416x416x3, because my model is so large, it cannot load into one gpu at same time, so I use model1.to(cuda:0), data.to('cuda:0) and rotate models between GPUs by for loop. seem like that:
image891×648 234 KB
All the model run in single processes, and can only inference in one GPU(other will suspend). |
st177805 | Yes, I try this one, to deliver the data to each sub-process by queue. I find that queue need to pickle and unpickle operation, but my data is numpy image which has shape 416x416x3. The pickle and unpickle speed a lot of time. It is not efficient than for loop, so I give up. Thanks a lot. |
st177806 | I’m training a modified ResNet on multiple GPUs. Here is the residual block for the ResNet:
class BasicBlockWOutput(nn.Module):
    expansion = 1

    def __init__(self, in_channels, channels, params, stride=1):
        super(BasicBlockWOutput, self).__init__()
        add_output = params[0]
        num_classes = params[1]
        input_size = params[2]
        self.output_id = params[3]
        self.depth = 2

        layers = nn.ModuleList()

        conv_layer = []
        conv_layer.append(nn.Conv2d(in_channels, channels, kernel_size=3, stride=stride, padding=1, bias=False))
        conv_layer.append(nn.BatchNorm2d(channels))
        conv_layer.append(nn.ReLU())
        conv_layer.append(nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1, bias=False))
        conv_layer.append(nn.BatchNorm2d(channels))
        layers.append(nn.Sequential(*conv_layer))

        shortcut = nn.Sequential()
        if stride != 1 or in_channels != self.expansion*channels:
            shortcut = nn.Sequential(
                nn.Conv2d(in_channels, self.expansion*channels, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion*channels)
            )
        layers.append(shortcut)
        layers.append(nn.ReLU())
        self.layers = layers

        if add_output:
            self.output = af.InternalClassifier(input_size, self.expansion*channels, num_classes)
            self.no_output = False
        else:
            self.output = None
            self.forward = self.only_forward
            self.no_output = True

    def forward(self, x):
        fwd = self.layers[0](x)  # conv layers
        fwd = fwd + self.layers[1](x)  # shortcut
        return self.layers[2](fwd), 1, self.output(fwd)  # output layers for this module

    def only_output(self, x):
        fwd = self.layers[0](x)  # conv layers
        fwd = fwd + self.layers[1](x)  # shortcut
        fwd = self.layers[2](fwd)  # activation
        out = self.output(fwd)  # output layers for this module
        return out

    def only_forward(self, x):
        fwd = self.layers[0](x)  # conv layers
        fwd = fwd + self.layers[1](x)  # shortcut
        return self.layers[2](fwd), 0, None  # activation
When I’m running it on multiple GPUs I’m having device mismatch issues:
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File “/export/mlrg/sshekhar/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py”, line 60, in _worker
output = module(*input, **kwargs)
File “/export/mlrg/sshekhar/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl
result = self.forward(*input, **kwargs)
File “/export/mlrg/sshekhar/XAI/Shallow-Deep-Networks-gpub/architectures/SDNs/ResNet_SDN.py”, line 159, in forward
fwd, is_output, output = layer(fwd)
File “/export/mlrg/sshekhar/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl
result = self.forward(*input, **kwargs)
File “/export/mlrg/sshekhar/XAI/Shallow-Deep-Networks-gpub/architectures/SDNs/ResNet_SDN.py”, line 68, in only_forward
fwd = self.layers[0](x) # conv layers
File “/export/mlrg/sshekhar/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl
result = self.forward(*input, **kwargs)
File “/export/mlrg/sshekhar/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py”, line 117, in forward
input = module(input)
File “/export/mlrg/sshekhar/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl
result = self.forward(*input, **kwargs)
File “/export/mlrg/sshekhar/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py”, line 419, in forward
return self._conv_forward(input, self.weight)
File “/export/mlrg/sshekhar/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py”, line 416, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected tensor for argument #1 ‘input’ to have the same device as tensor for argument #2 ‘weight’; but device 1 does not equal 0 (while checking arguments for cudnn_convolution) |
st177807 | It seems like your input is on GPU 1, but your network is on GPU 0. From the error trace, it seems like the issue stems from how the only_forward function is defined and passed as a reference to self.forward. This looks similar to the issue described in https://github.com/pytorch/pytorch/issues/8637 5 - check out the suggestion by Ssnl about why duplicating non-tensor objects in DataParallel could lead to these device mismatch errors. |
st177808 | So I was able to resolve my issue based on Omkar’s suggestion to look at https://github.com/pytorch/pytorch/issues/8637 5
I changed the following: Instead of binding forward to the only_forward method, I directly call it inside the forward method:
        if add_output:
            self.output = af.InternalClassifier(input_size, self.expansion*channels, num_classes)
            self.no_output = False
        else:
            self.output = None
            # self.forward = self.only_forward
            self.no_output = True

    def forward(self, x):
        if self.no_output:
            return self.only_forward(x)
        else:
            fwd = self.layers[0](x)  # conv layers
            fwd = fwd + self.layers[1](x)  # shortcut
            return self.layers[2](fwd), 1, self.output(fwd)  # output layers for this module

    def only_output(self, x):
        fwd = self.layers[0](x)  # conv layers
        fwd = fwd + self.layers[1](x)  # shortcut
        fwd = self.layers[2](fwd)  # activation
        out = self.output(fwd)  # output layers for this module
        return out

    def only_forward(self, x):
        fwd = self.layers[0](x)  # conv layers
        fwd = fwd + self.layers[1](x)  # shortcut
        return self.layers[2](fwd), 0, None  # activation |
st177809 | I have a module like this:
class Block(nn.Module):
    def __init__(self, net):
        super(Block, self).__init__()
        self.net = net
        self.net_copy = copy.deepcopy(net)

    def forward(self, x):
        self.net_copy.load_state_dict(self.net.state_dict())
        return self.net(x)
The net is an nn.Sequential() module. When I use PyTorch >= 1.5 and nn.DataParallel with multiple GPUs, net_copy.state_dict().keys() is different from net.state_dict().keys(). However, when I use PyTorch == 1.4 or a single GPU, this problem doesn’t appear. How can I make sure that net and net_copy are exactly the same? |
st177810 | Solved by mrshenli in post #2
This is probably due to this PR: https://github.com/pytorch/pytorch/pull/33907
In v1.5, parameters on replicated models are no longer considered as leaves, as they shouldn’t be. If you really need to access those replicated parameters, you probably can get them from _former_parameters and manually … |
st177811 | This is probably due to this PR: https://github.com/pytorch/pytorch/pull/33907 5
In v1.5, parameters on replicated models are no longer considered as leaves, as they shouldn’t be. If you really need to access those replicated parameters, you probably can get them from _former_parameters and manually add them into the stat_dict?
github.com
pytorch/pytorch/blob/c93e96fbd9903e576c6c1aa2fe12d8d548ae2d5b/torch/nn/parallel/replicate.py#L148 2
replica._parameters[key] = None
else:
param_idx = param_indices[param]
for j in range(num_replicas):
replica = module_copies[j][i]
param = param_copies[j][param_idx]
# parameters in replicas are no longer leaves,
# so setattr them as non-parameter attributes
setattr(replica, key, param)
# expose the parameter for DDP
replica._former_parameters[key] = param
for key, buf in module._buffers.items():
if buf is None:
for j in range(num_replicas):
replica = module_copies[j][i]
replica._buffers[key] = None
else:
if buf.requires_grad and not detach:
buffer_copies = buffer_copies_rg
buffer_idx = buffer_indices_rg[buf]
else:
cc @ngimel please correct me if I am wrong. And any thoughts on whether we should make state_dict() consistent between v1.4 and v1.5? |
st177812 | In order to access the _former_parameters, we would need to access replica, right? Can you help me figure out how to access _former_parameters in OP’s example?
Or how to recreate state dict in some other manner? |
st177813 | Hey @aashaka
Below is the implementation of the DataParallel.forward method. It basically calls replicas[i].forward(inputs[i], ...). So during execution, the self variable in the forward function is the replica. Hence, you can use self._former_parameters to access the field in forward function.
github.com
pytorch/pytorch/blob/c3466dabaae9328b207804afb043b7b519f64825/torch/nn/parallel/data_parallel.py#L147-L162 1
def forward(self, *inputs, **kwargs):
if not self.device_ids:
return self.module(*inputs, **kwargs)
for t in chain(self.module.parameters(), self.module.buffers()):
if t.device != self.src_device_obj:
raise RuntimeError("module must have its parameters and buffers "
"on device {} (device_ids[0]) but found one of "
"them on device: {}".format(self.src_device_obj, t.device))
inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
if len(self.device_ids) == 1:
return self.module(*inputs[0], **kwargs[0])
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
outputs = self.parallel_apply(replicas, inputs, kwargs)
return self.gather(outputs, self.output_device) |
st177814 | I managed to recreate the state_dict using code similar to state_dict 5. Thanks for your help.
I noticed that _former_parameters exists in 1.5.1 but not in 1.5.0. It seems tricky to get the parameters in 1.5.0 if we do not know the names of the parameters in advance (but still possible since we are setting attr). Any suggestions for this? |
st177815 | Hey @aashaka, yep, we added _former_parameters after v1.5 to fix the regression caused on https://github.com/pytorch/pytorch/pull/33907.
If this has become very inconvenient for you, I would suggest switch to DistributedDataParallel. There are more discussions here: https://github.com/pytorch/pytorch/issues/36268 1 |
st177816 | Thanks a lot. I have one last question. Like the OP, I need to recreate the state dict every time in the forward pass. I see about 8x increase in training time when compared to original PyTorch DataParallel. Any ideas why this might be the case?
def create_state_dict_new(main_module):
    state_dict_data = OrderedDict()

    def state_dict_recursion(this_module, state_dict_data, prefix=''):
        if hasattr(this_module, "_former_parameters"):
            for name, param in this_module._former_parameters.items():
                if param is not None:
                    state_dict_data[prefix + name] = param
        for name, buf in this_module._buffers.items():
            if buf is not None:
                state_dict_data[prefix + name] = buf
        for name, module in this_module._modules.items():
            if module is not None:
                state_dict_recursion(module, state_dict_data, prefix + name + '.')

    state_dict_recursion(main_module._modules['model'], state_dict_data)
    return state_dict_data

class ModelWrapper(torch.nn.Module):
    def __init__(self, model):
        super(ModelWrapper, self).__init__()
        self.model = model

    def forward(self, x):
        state_list = create_state_dict_new(self)
        return model(x)

model = torch.nn.DataParallel(ModelWrapper(model)) |
st177817 | Could you please measure the time spent on the create_state_dict_new?
The forward function will be launched in each thread. If you have 4 GPUs, it means that there will be 4 threads executing create_state_dict_new independently. However, due to the Python GIL, the 4 threads cannot run the function concurrently, which would further exacerbate the delay. |
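One way to measure this (a sketch, reusing the ModelWrapper idea from the question with a hypothetical timed variant):

import time
import torch

class TimedModelWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        start = time.perf_counter()
        create_state_dict_new(self)   # the helper defined in the question
        elapsed = time.perf_counter() - start
        print("create_state_dict_new on {}: {:.2f} ms".format(x.device, elapsed * 1000))
        return self.model(x)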
st177818 | I used LSTMCell for decoders .And my decoder module looks like this :decoders = nn.ModuleList([Decoder(args, gpu) for i in range(args.max_len)])
I changeded it for parallel using
decoders = nn.parallel.DistributedDataParallel(decoders,
device_ids=[gpu])
And when I wrote this
output = decoders [i] (input)
the error raised
TypeError: ‘DistributedDataParallel’ object does not support indexing
How can I fix this? |
st177819 | Hey @yhz_yhz, if you would like to access the original module that you passed to DistributedDataParallel ctor, you can use decoders.module. See the code below.
github.com
pytorch/pytorch/blob/172ed51a17e70aabfbdc096491fde79755de9d08/torch/nn/parallel/distributed.py#L384 1
output_device = device_ids[0]
self.output_device = _get_device_index(output_device, True)
if process_group is None:
self.process_group = _get_default_group()
else:
self.process_group = process_group
self.dim = dim
self.module = module
self.device = list(self.module.parameters())[0].device
self.broadcast_buffers = broadcast_buffers
self.find_unused_parameters = find_unused_parameters
self.require_backward_grad_sync = True
self.require_forward_param_sync = True
self.ddp_join_enabled = False
self.gradient_as_bucket_view = gradient_as_bucket_view
if hasattr(module, '_ddp_params_and_buffers_to_ignore'):
self.parameters_to_ignore = module._ddp_params_and_buffers_to_ignore
else: |
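In other words, assuming decoders is the DDP-wrapped ModuleList from the question, the indexing goes through the .module attribute:

# Index the underlying ModuleList that DistributedDataParallel wraps.
output = decoders.module[i](input)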
st177820 | Can we have communication primitives such as send and recv when using the TensorPipe backend? I want to use it for inference between processes, having TensorPipe figure out the best way of communications. Is there any other way to implement it using the current RPC - TensorPipe api? |
st177821 | Hey @ItamarWilf,
Can we have communication primitives such as send and recv when using the TensorPipe backend?
Yep, it should be possible to use ProcessGroup send/recv in conjunction with RPC. But we don’t yet have a TensorPipe backend for ProcessGroups. So the send/recv API needs to choose from gloo, nccl, and mpi backends.
One caveat is that, ProcessGroup send/recv requires the orders to match on the sender and receiver. However, RPC by-itself does not guarantee orders, as it will grab threads from a pool to process requests concurrently. So, if you are using ProcessGroup send/recv within RPC functions, you might need to enforce the order in user functions.
cc @lcw
I want to use it for inference between processes, having TensorPipe figure out the best way of communications. Is there any other way to implement it using the current RPC - TensorPipe api?
Yep, this should be possible. You can, e.g., use RPC to implement a queue, and then call enqueue on the sender and dequeue on the receiver. Then enqueue/dequeue will be similar to send/receive.
Please let us know if this would work for you. |
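A minimal sketch of that queue idea, assuming two workers named "sender" and "receiver" that have already called rpc.init_rpc:

import queue
import torch
import torch.distributed.rpc as rpc

_inbox = queue.Queue()   # lives in the receiver process

def enqueue(tensor):
    # Executed on the receiver when the sender issues the RPC.
    _inbox.put(tensor)

def recv():
    # Blocks the receiver until a tensor arrives.
    return _inbox.get()

# On the sender:
# rpc.rpc_sync("receiver", enqueue, args=(torch.ones(3),))
# On the receiver:
# t = recv()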
st177822 | Echoing @mrshenli, if all you need is send and recv, you may be better off using the RPC primitives, which were designed for this type of point-to-point communication, rather than using the ProcessGroup interface. There are currently no plans to expose TensorPipe as a ProcessGroup backend. |
st177823 | Is it possible to have a dynamic world size when using torch.distributed.rpc?
I want to have a changing number of processes communicating using the TensorPipe backend, without explicitly stating a world size, with each process dynamically assigned a rank. |
st177824 | Hey @ItamarWilf,
Unfortunately, this is not yet possible with the RPC package, but it is in our roadmap. |
st177825 | I am trying to train a network with “Distributed Data Parallel” on multiple nodes, each having a different public IP address by sshing simultaneously into these nodes using “pdsh” coordination tool as suggested in this tutorial 22.
Specifically, given a local machine with public IP address “Ip0” and 2 remote nodes with Public IP address “Ip1” and “Ip2” respectively on which training is to be performed remotely from the local machine, how to go about making such a set-up?
Also, how to ensure before running the training script on each remote node that the 2 remote nodes have access to each other?
Thanks in advance. |
st177826 | Solved by mrshenli in post #4
No, you don’t need to ssh from node-1 to other nodes to launch the script. The ping/ssh I mentioned is only to check what IP would work. If you confirm that the IP of one node is accessible for all other nodes, you can set that node as master. This is only for rendezvous, and all nodes will use the… |
st177827 | arshagarwal:
Also, how to ensure before running the training script on each remote node that the 2 remote nodes have access to each other?
Can you try ssh to one of the remote machine and then ping/ssh another remote machine?
Specifically, given a local machine with public IP address “Ip0” and 2 remote nodes with Public IP address “Ip1” and “Ip2” respectively on which training is to be performed remotely from the local machine, how to go about making such a set-up?
One of the remote machines can serve as the master, i.e., use its IP address as the MASTER_ADDR and pick a port for MASTER_PORT. |
st177828 | If you mean ssh-ing into node-2 from node-1, yes I can do that. However, what if I were to do this with n (say 10) nodes? Should I then ssh into remote node-1 and then from the remote node-1 terminal use “pdsh” to ssh into all other nodes simultaneously? @mrshenli |
st177829 | arshagarwal:
Should I then ssh into remote node-1 and then from the remote node-1 terminal use “pdsh” to ssh into all other nodes simultaneously?
No, you don’t need to ssh from node-1 to other nodes to launch the script. The ping/ssh I mentioned is only to check what IP would work. If you confirm that the IP of one node is accessible for all other nodes, you can set that node as master. This is only for rendezvous, and all nodes will use the rendezvous process to discover each other automatically. |
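A sketch of what the rendezvous setup could look like on every node, assuming node-1 (public IP Ip1) was confirmed reachable from all nodes and is used as the master; the backend choice and the port are assumptions, and rank/world_size are passed in per process:

import os
import torch.distributed as dist

def init_distributed(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "Ip1"    # reachable IP of the chosen master node
    os.environ["MASTER_PORT"] = "29500"  # any free port on that node
    # The master address is only used for rendezvous; afterwards all nodes
    # discover each other automatically.
    dist.init_process_group("nccl", init_method="env://",
                            rank=rank, world_size=world_size)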
st177830 | Hi, all
I was trying out a very simple example to use DistributedDataParallel but the code got stuck at data loading for some reason. The code I used is pasted below in its entirety.
import os
import time
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torchvision import datasets, transforms
from torch import nn

class Model(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.fc = nn.Linear(7*7*32, num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.reshape(x.size(0), -1)
        x = self.fc(x)
        return x

def main(rank, world_size):
    # Initialisation
    dist.init_process_group(
        backend="nccl",
        init_method="env://",
        world_size=world_size,
        rank=rank
    )
    # Fix random seed
    torch.manual_seed(0)
    # Initialize network
    net = Model()
    net.cuda(rank)
    # Initialize loss function
    criterion = torch.nn.CrossEntropyLoss().to(rank)
    optimizer = torch.optim.SGD(net.parameters(), 1e-4)
    net = torch.nn.parallel.DistributedDataParallel(net, device_ids=[rank])
    # Prepare dataset
    trainset = datasets.MNIST('./data', train=True, download=True,
        transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])
    )
    # Prepare sampler
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        trainset, num_replicas=world_size, rank=rank
    )
    # Prepare dataloader
    train_loader = torch.utils.data.DataLoader(
        trainset, batch_size=100, shuffle=False,
        num_workers=0, pin_memory=True, sampler=train_sampler)

    epoch = 0
    iteration = 0
    for _ in range(5):
        epoch += 1
        train_loader.sampler.set_epoch(epoch)
        timestamp = time.time()
        print("Rank: {}. Before dataloader".format(rank))
        for batch in train_loader:
            print("Rank: {}. Batch loaded".format(rank))
            inputs = batch[0]
            targets = batch[1]
            iteration += 1
            inputs = inputs.cuda(rank, non_blocking=True)
            targets = targets.cuda(rank, non_blocking=True)
            output = net(inputs)
            loss = criterion(output, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

if __name__ == '__main__':
    # Number of GPUs to run the experiment with
    WORLD_SIZE = 2
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "8888"
    mp.spawn(main, nprocs=WORLD_SIZE, args=(WORLD_SIZE,))
As I ran the program, the print message I got looks like this
Rank: 1. Before dataloader
Rank: 0. Before dataloader
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
Rank: 0. Batch loaded
And from the GPU utilisation, I noticed that the first GPU (corresponding to the subprocess with rank 0) is at its full capacity (100%) while the second one is at 0%. More interestingly, the second subprocess (with rank 1) also occupies a small amount of memory on the first GPU. I can’t seem to figure out the problem. Please let me know if you spot anything that might help.
Many thanks,
Fred |
st177831 | Solved by DzReal in post #3
Hi, Rohan
Thanks for your attention. I’ve managed to resolve the hang by adding torch.cuda.set_device(rank) before the training loop. This stopped subprocesses with rank larger than 0 from allocating memory on cuda:0 i.e. the device used for subprocess with rank 0.
Cheers,
Fred |
st177832 | Thanks for reporting this issue! I confirm that I can indeed reproduce this issue and have filed a bug over at https://github.com/pytorch/pytorch/issues/46259 28 to get more discussion on this. |
st177833 | Hi, Rohan
Thanks for your attention. I’ve managed to resolve the hang by adding torch.cuda.set_device(rank) before the training loop. This stopped subprocesses with rank larger than 0 from allocating memory on cuda:0 i.e. the device used for subprocess with rank 0.
Cheers,
Fred |
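For readers hitting the same hang, a sketch of where that call goes in the setup from the original post (the helper name is illustrative):

import torch
import torch.distributed as dist

def setup(rank: int, world_size: int):
    dist.init_process_group(backend="nccl", init_method="env://",
                            world_size=world_size, rank=rank)
    # Bind this process to its own GPU before any CUDA work, so ranks > 0
    # never allocate memory on cuda:0.
    torch.cuda.set_device(rank)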
st177834 | I want to benchmark how quickly PyTorch with the Gloo backend is able to all-reduce all-gather a model synchronously. To do so, I’ve written the following script [2] working with the latest Gloo backend / PyTorch. I start it on N machines, and then they together all-reduce it fine. However, the bandwidth that I see, irrespective of N, is 0.5 * link_bandwidth. The N machines are all connected to a 100 Mbps per-port switch. This is expected with a large N, as the documentation does state that it uses a ring all-reduce/all-gather to perform the distributed.all_reduce behind that scenes.
However, for i.e. N=2 I would expect it to perform the all-reduce at 1.0 * link_bandwidth, as one node only needs to send to one other node its full model. My experiments show however 0.5 * link_bandwidth [3]. I would expect the bandwidth of the all-reduce for any N to be [1]:
Amount of data to send in all-reduce: (N - 1) / N
Amount of data to send in all-gather: (N - 1) / N
Total data for each node to send: 2 * (N - 1) / N
I.e., at N=2 we only need to send 2 * 1/2 = 1x model, and at N->inf we have to send 2x model. Translating into a all-reduce/all-gather bandwidth of 1.0 * link_bandwidth (N=2) and 0.5 * link_bandwidth (N->inf).
I am not sure where my calculation is wrong, or where I am misunderstanding the all-reduce ring method.
[1] https://github.com/zhangruiskyline/DeepLearning/blob/master/doc/system.md#allreduce-in-practice 17
[2] allreduce.py – execute for N=2 on each of the two machines: python allreduce.py 0 2 192.168.0.1 10000 and python allreduce.py 1 2 192.168.0.1 10000
#!/usr/bin/env python
import os
import sys
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
import time

# Values are 4 bytes each, so 2 * 1000 * 1000 * 32 * 4 = 256 MB = 2048 Mbit
MODEL_SIZE_VALUES = 2 * 1000 * 1000 * 32
BIT_PER_VALUE = 4 * 8
BITS_PER_MBIT = 1000 * 1000

def current_time_in_ms():
    return int(round(time.time() * 1000))

def run(rank, size):
    group = dist.new_group(list(range(size)))
    tensor = torch.ones(MODEL_SIZE_VALUES, dtype=torch.float32)
    print("Performing allreduce...")
    print(" > Data to send: %d Mbit" % ((MODEL_SIZE_VALUES * BIT_PER_VALUE) / float(BITS_PER_MBIT)))
    start = current_time_in_ms()
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group)
    elapsed_ms = current_time_in_ms() - start
    print(" > Finished.")
    print(" > Time: %.2f s" % (elapsed_ms / 1000.0))
    print(" > Speed: %.2f Mbit/s" % ((MODEL_SIZE_VALUES * BIT_PER_VALUE / BITS_PER_MBIT) / float(elapsed_ms / 1000.0)))
    print(' > Result: Rank ', rank, ' has data ', str(tensor), '.\n')

def init_process(my_rank, size, master_address, master_port, fn, backend='gloo'):
    # Initialize the distributed environment
    os.environ['MASTER_ADDR'] = master_address
    os.environ['MASTER_PORT'] = master_port
    # Initialize process group
    print("Initializing process group...")
    dist.init_process_group(backend, rank=my_rank, world_size=size)
    print(" > Initialized.")
    print("")
    fn(my_rank, size)

def main(my_rank, size, master_address, master_port):
    p = Process(target=init_process, args=(my_rank, size, master_address, master_port, run))
    p.start()
    p.join()

if __name__ == "__main__":
    args = sys.argv[1:]
    if len(args) != 4:
        print("Usage: python allreduce.py <my rank> <size> <master address> <master port>")
        exit(1)
    else:
        main(int(args[0]), int(args[1]), str(args[2]), str(args[3]))
[3] Output of machine 1 and 2 with N=2:
Machine 1:
Initializing process group...
> Initialized.
Performing allreduce...
> Data to send: 2048 Mbit
> Finished.
> Time: 44.79 s
> Speed: 45.72 Mbit/s
> Result: Rank 0 has data tensor([2., 2., 2., ..., 2., 2., 2.]) .
Machine 2:
Initializing process group...
> Initialized.
Performing allreduce...
> Data to send: 2048 Mbit
> Finished.
> Time: 44.79 s
> Speed: 45.72 Mbit/s
> Result: Rank 1 has data tensor([2., 2., 2., ..., 2., 2., 2.]) . |
st177835 | Thanks for posting the question. Your analysis is correct, it should be equal to link bandwidth for N=2 (provided the input dimensions are large enough for the runtime to be dominated by the bandwidth). The implementation in Gloo ensures there is always 1 chunk in flight and 1 chunk being reduced, so it is possible there is something wrong there for the N=2 case.
I’ll investigate and post back here.
edit: I added https://github.com/facebookincubator/gloo/issues/169 13 to not lose track of it. |
st177836 | I got around to running a test with the Gloo benchmark tool and confirmed the issue.
For a larger explanation of the issue and the fix, see https://github.com/facebookincubator/gloo/pull/192 27.
Once this is merged and bumped in PyTorch, I expect you’ll be able to run the same test and find the bandwidth to be very close to link speed. |
st177837 | I load my trained model from checkpoint for a fine-tune training.
then when I do:
model.eval()
model(x)
output seems OK, loss scale is same as at the end of pre-train.
but for:
model.train()
model(x)
output is totally different, very bad - just like it’s a “training from scratch”.
the model pretrained with DDP
the model has BN layers
Am I doing something wrong?
Thanks |
st177838 | For model.train are you also updating the parameters? If you want to do inference on your model you should use
with torch.no_grad():
model.eval()
# inference |
st177839 | In addition, you can try setting torch.backends.cudnn.enabled = False when training using SyncBatchNorm and DDP, as discussed in Training performance degrades with DistributedDataParallel 4. |
st177840 | How is Multiple node, Multiple worker Allreduce implemented in PyTorch?
I know that in a single node multi-worker setting, allreduce is implemented with a ring allreduce algorithm. How does this change in a multinode setting? |
st177841 | Solved by mrshenli in post #2
Hey @vineeths, PyTorch distributed all_reduce calls into the allreduce API provided by the communication backend (Gloo, NCCL, and MPI). Gloo uses ring allreduce. NCCL has both ring and tree allreduce. See this discussion: https://github.com/NVIDIA/nccl/issues/256 |
st177842 | Hey @vineeths, PyTorch distributed all_reduce calls into the allreduce API provided by the communication backend (Gloo, NCCL, and MPI). Gloo uses ring allreduce. NCCL has both ring and tree allreduce. See this discussion: https://github.com/NVIDIA/nccl/issues/256 5 |
st177843 | I am using torch 1.6. When I try to run the code, it tells me like below:
AttributeError: ‘torch.distributed.rpc.TensorPipeRpcBackendOptions’ object has no attribute ‘num_send_recv_threads’
Could anyone please tell me what possible reasons could be?
details are like below:
File "rpc_pipeline.py", line 291, in setup
    rpc_backend_options=options
  File "/conda/lib/python3.7/site-packages/torch/distributed/rpc/__init__.py", line 90, in init_rpc
    api._init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
  File "/conda/lib/python3.7/site-packages/torch/distributed/rpc/api.py", line 299, in _init_rpc_backend
    rpc_backend_options=rpc_backend_options,
  File "/conda/lib/python3.7/site-packages/torch/distributed/rpc/backend_registry.py", line 94, in init_backend
    return backend.value.init_backend_handler(*args, **kwargs)
  File "/conda/lib/python3.7/site-packages/torch/distributed/rpc/backend_registry.py", line 144, in _process_group_init_backend_handler
    rpc_backend_options.num_send_recv_threads,
AttributeError: 'torch.distributed.rpc.TensorPipeRpcBackendOptions' object has no attribute 'num_send_recv_threads'
st177844 | Solved by ConnollyLeon in post #2
I solved this after adding this argument.
[image] |
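The screenshot isn't reproduced above, but the trace itself shows a mismatch: a TensorPipeRpcBackendOptions object ends up in the process-group backend handler, which expects ProcessGroupRpcBackendOptions. A hedged sketch of making the backend and options agree explicitly (names follow the torch 1.6 API; whether this matches the poster's exact fix is an assumption):
import torch.distributed.rpc as rpc

options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=16,
                                          init_method="tcp://localhost:29500")
rpc.init_rpc("worker0",
             backend=rpc.BackendType.TENSORPIPE,   # must match the options type
             rank=0,
             world_size=2,
             rpc_backend_options=options)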
st177845 | Hello!
I am trying to build a distributed RL system with PyTorch RPC, but I have a problem: I can't fully use my GPU. Here is my code structure. There are two parts, a learner and an actor.
class Learner:
def __init__(self):
self.policy = Policy()
self.buffer = container()
run_multiple_actor()
def learning_loop(self):
"backward & optimize"
while True:
get_trans_from_buffer()
computing_loss()
backward()
optimizer.step()
def get_action(self, state):
action = self.policy(state)
insert_to_buffer(state,action)
return action
class Actor:
def __init__(self):
self.env = env
self.learner_rref = rref
    def act_loop(self):
        state = self.env.reset()
        while True:
            action = self.learner_rref.rpc_sync().get_action(state)
            state = self.env.step(action)
After initializing, the learner runs in learning_loop. After initializing, the actor runs in act_loop and calls the Learner's get_action remotely.
The question is: can the get_action threads run simultaneously on the GPU? If they can, it seems that as long as I run enough actors, I should fully use my GPU. However, after adding several actors, the GPU utilization stops increasing and stays at a low level (e.g. 20% or 30%). And I don't think it's a problem with my CPU cores; I have enough CPUs to make all actors run simultaneously.
Could anyone point out what the problem with my code is? I am new to PyTorch RPC, please help.
st177846 | Hey @LW-Ricarido, sorry about the delay.
If you use regular Python functions as RPC target, multiple requests cannot run in parallel on the callee side due to Python Global Interpreter Lock (GIL). To avoid the lock, you can convert your function into a TorchScript function by adding a @torch.jit.script decorator. Some examples are available here 1, please search for @torch.jit.script.
This is a more thorough doc for TorchScript.
Example tests can be found here. |
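A minimal sketch of what that looks like (illustrative names; both processes must import the same function definition):
import torch
import torch.distributed.rpc as rpc

@torch.jit.script
def score_state(weight: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
    # Runs without holding the Python GIL on the callee, so several
    # in-flight RPCs can execute this concurrently.
    return torch.matmul(state, weight)

# On the actor side, after rpc.init_rpc(...) has run on every worker:
# fut = rpc.rpc_async("learner", score_state, args=(weight, state))
# action_scores = fut.wait()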
st177847 | Hey @mrshenli, thanks a lot. But I found some problems when using the @torch.jit.script decorator. When I use torch.jit.script, the compiler cannot infer self.policy correctly, which is an nn.Module and an attribute of Learner. Should I change the Learner and Actor classes to torch.jit.ScriptModule? And if I want to avoid the GIL, should both the callee function and the caller be wrapped by @torch.jit.script?
st177848 | Hey @LW-Ricarido
And if I want to avoid GIL, should both callee function and caller be wrapped by @torch.jit.script ?
Yep, the current RPC system requires the caller and callee to use the same function definition. You can, for example, define those functions in a utility file and let both caller and callee import that file.
st177849 | I've encountered a problem when sharing a model between processes, and it is critical to me (for memory resources).
I've been sharing a model between several processes (on Linux, Ubuntu). The model is used only for a forward pass, since it performs some sort of pre-processing for the samples (before they are fed to a different network). I've done everything I can to ensure that: the model is in eval mode, each parameter has requires_grad set to False, and the forward pass runs under 'with torch.no_grad():'.
The problem is that after the new process is spawned, for some reason it allocates new memory on the GPU. At first I thought this memory held intermediate values of the computational graph, but then I noticed each process still allocates new GPU memory even when sleep is invoked (i.e. before any data is run through the model). Furthermore, it is a lot of memory relative to the model! The model is about 4GB (let's say 2GB of weights and 2GB of optimizer state), and the memory allocated is 1GB (!), which may also indicate that the network is not completely replicated, only a part of it.
Here is some example code; I think it contains the most critical parts of what I'm doing:
def inferrerFunc(neuralNetwork):
#If we use sleep here, memory is still allocated on GPU
#time.sleep(1000)
# Imagine there's a dataset here...
for x in dataset:
y_t = neuralNetwork(x)
class mainProc():
def __init__(self):
self.neuralNetwork = neuralNetwork()
torch.multiprocessing.set_start_method('spawn', force=True)
self.neuralNetwork.share_memory()
self.neuralNetwork.eval()
def startInferrer(self):
self.inferrer = torch.multiprocessing.Process(target = inferrerFunc, args = (self.neuralNetwork,))
self.inferrer.start() |
st177850 | Solved by radim_shark in post #3
It turns out that every-time a process holds any pytorch object that is allocated on the GPU, then it allocates an individual copy of all the kernels (cuda functions) that pytorch uses, which is about 1GB.
It seems there is no way around it, and if your machine has Xgb of GPU RAM, then you’re limit… |
st177851 | When you pass self.neuralNetwork as a parameter, I believe it is pickled and then unpickled in the child process. The unpickling process must be re-allocating some memory. Note that share_memory only applies to CPU tensors and not GPU tensors. The pickling usually happens in spawn mode; you can try to use fork to see if that resolves the issue.
st177852 | It turns out that every time a process holds any PyTorch object that is allocated on the GPU, it allocates an individual copy of all the kernels (CUDA functions) that PyTorch uses, which is about 1GB.
It seems there is no way around it, and if your machine has X GB of GPU RAM, then you're limited to roughly X processes. The only way around it is dedicating one process to hold the PyTorch module and have it interact with the other processes in a producer-consumer pattern, which is a real headache when it comes to scalability, and even more so for real-time applications.
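A minimal sketch of that producer-consumer workaround (illustrative names; a real application would need batching, error handling and a cleaner shutdown). Only one process ever builds a CUDA context; the others exchange CPU tensors with it through queues.
import torch
import torch.multiprocessing as mp

def make_model():
    return torch.nn.Linear(16, 4)          # stand-in for the real network

def gpu_server(request_q, result_qs):
    model = make_model().cuda().eval()     # the only CUDA allocation in the job
    with torch.no_grad():
        while True:
            worker_id, x = request_q.get()
            if worker_id is None:          # shutdown sentinel
                break
            result_qs[worker_id].put(model(x.cuda()).cpu())

def producer(worker_id, request_q, result_q):
    x = torch.randn(8, 16)                 # stays on the CPU in this process
    request_q.put((worker_id, x))
    y = result_q.get()
    print(worker_id, y.shape)

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    request_q = mp.Queue()
    result_qs = [mp.Queue() for _ in range(4)]
    server = mp.Process(target=gpu_server, args=(request_q, result_qs))
    server.start()
    producers = [mp.Process(target=producer, args=(i, request_q, result_qs[i]))
                 for i in range(4)]
    for p in producers:
        p.start()
    for p in producers:
        p.join()
    request_q.put((None, None))
    server.join()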
st177853 | Since this seems like a memory limitation imposed by PyTorch, feel free to file a GitHub issue over at https://github.com/pytorch/pytorch/issues. It would be valuable to have a repro where extra memory is allocated unexpectedly. |
st177854 | It is a known issue and, as I understand it, changing that requires a massive change, so it's not even on the agenda.
st177855 | Is there a good way to share information between DDP processes? Even if it’s just something from process-0 to the other processes.
My use-case is that I need to coordinate some data-loading/sampling across my different processes, and it would be good if I could, for example, determine what to sample in process-0 and distribute that information to the other processes. |
st177856 | You can use point-to-point communication 5 or collective operations 4.
For example, a tensor could be sent by torch.distributed.send and received by torch.distributed.recv functions by specifying the target ranks.
You can choose to broadcast or reduce if you wish. I usually use the torch.distributed.all_reduce function to collect loss information between processes.
Example here 3.
If you use the nccl backend, you can only use CUDA tensors for communication.
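For the original use case (rank 0 decides what to sample and tells everyone else), a single broadcast is usually enough. A minimal sketch, assuming the process group is already initialized (with nccl, the tensor has to live on the GPU):
import torch
import torch.distributed as dist

def broadcast_sample_indices(num_samples, k, device):
    if dist.get_rank() == 0:
        indices = torch.randperm(num_samples, device=device)[:k]
    else:
        indices = torch.empty(k, dtype=torch.long, device=device)
    dist.broadcast(indices, src=0)     # every rank now holds rank 0's choice
    return indices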
st177857 | In addition to the above response, standard multiprocessing communication methods such as Queue or mp.Manager should work as well: https://docs.python.org/3/library/multiprocessing.html 5, assuming your processes are on the same machine.
If your particular use case is around data sampling, you could also look into DistributedSampler, which will automatically partition data out to DDP ranks for you: https://github.com/pytorch/pytorch/blob/c1e6592964261d2856c84e166a0989684f946697/torch/utils/data/distributed.py#L12 4
Finally, we also have APIs such as all_gather_object and broadcast_object_list which can be used to communicate general picklable Python objects across ranks: https://github.com/pytorch/pytorch/blob/master/torch/distributed/distributed_c10d.py#L1279 17. These APIs are quite new and subject to significant changes, but may be useful for your use case.
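A hedged sketch of the object-based API (it is brand new, so the signature may shift between releases); rank 0 fills the list and every other rank receives a copy:
import torch.distributed as dist

plan = [{"epoch": 3, "indices": [7, 1, 4]}] if dist.get_rank() == 0 else [None]
dist.broadcast_object_list(plan, src=0)
sampling_plan = plan[0]        # identical picklable object on every rank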
st177858 | Hi there!
I'm using PyTorch as an autograd library. Can someone provide a simple tutorial or snippet with a simple example of multi-GPU processing?
If no gradients have to be generated, the example in PyTorch: How to parallelize over multiple GPU using multiprocessing.pool 3
seems reasonable, but what should I do if gradients are needed?
Is there a tutorial on the simple "MPI-like" calls reported in the doc? https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html 1
How do I use the data_parallel function reported there?
I tried to apply the concepts in the DataParallel guide, but the code is actually slower if the model is encapsulated in nn.DataParallel. Clearly something is missing on my side.
import torch
from torch import Tensor
import torch.nn as nn
from torch.nn.parameter import Parameter
import time
class ModDef(nn.Module):
def __init__(self, input_size=100, output_size=100) -> None:
super(ModDef, self).__init__()
self.w1 = Parameter(torch.randn(1024,1024))
self.w2 = Parameter(torch.randn(1024,1024))
def forward(self, X: Tensor) -> Tensor:
output = torch.exp(100+(self.w1 * X - X.mean() / (self.w2 + 1))**2)
return output
if __name__ == '__main__':
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = ModDef()
print('send')
my = nn.DataParallel(model).to(device)
optimizer = torch.optim.SGD(my.parameters(), lr=1e-4)
datain = torch.randn(100,1024,1024).to(device)
d2 = torch.randn(100,1024,1024).to(device)
print('start')
ta = time.time()
for ii in range(100):
optimizer.zero_grad()
out = my(datain)
loss = (out - d2).sum()
loss.backward()
optimizer.step()
tb = time.time()
print(tb-ta)
print('end') |
st177859 | Hey @heavyfranz
Please see this section in the overview: https://pytorch.org/tutorials/beginner/dist_overview.html#data-parallel-training 6
DataParallel can be slower especially if the model is large, as it will do model replication, input scattering, and output gathering in each forward pass. DistributedDataParallel is expected to be faster in this case. |
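A hedged sketch of what a DistributedDataParallel version of the snippet above could look like, with one process per GPU (ModDef is the module from the question; the rendezvous settings and spawn boilerplate are the usual single-machine recipe, not anything from the original post):
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = ModDef().to(rank)                  # ModDef as defined in the question
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-4)

    # Each rank trains on its own shard of the batch; gradients are
    # all-reduced automatically inside backward().
    datain = torch.randn(25, 1024, 1024, device=f"cuda:{rank}")
    target = torch.randn(25, 1024, 1024, device=f"cuda:{rank}")
    for _ in range(100):
        optimizer.zero_grad()
        loss = (ddp_model(datain) - target).sum()
        loss.backward()
        optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(run, args=(world_size,), nprocs=world_size)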
st177860 | Dear @mrshenli,
thank you for the kind and fast answer. Yeah, you're right.
The following part of your reply makes my jaw drop:
it will do model replication […] in each forward pass
It was not clear to me from the beginning.
Somewhere I read that at least the scattering operation is done automatically by splitting the input tensor along the first axis. Is that true?
Finally, just to know: in your opinion, is the code at least correct for the DataParallel construct?
st177861 | heavyfranz:
Somewhere I read that at least the scattering operation is done automatically segmenting the input tensor on the first axis, is it true?
Yes, this is true. DataParallel will consider the first dimension as the batch dimension.
Finally, just to know, in your opinion, is the code is at least correct for the DataParallel construct?
It should work, I think. But usually the model is moved to the target device before passing it to the DataParallel constructor, like:
my = nn.DataParallel(model.to(device)) |
st177862 | Hi,
What is the most efficient way to implement a work pool on a GPU cluster? I have tried with JoinableQueue, but it takes a long time to get a large item (e.g. a training batch) from the queue. Is there a better way to implement it? Is it possible to store data on the GPU and share it between different processes?
And I read this documentation for shared memory: https://pytorch.org/docs/stable/notes/multiprocessing.html 1
Is this "shared memory" in CPU or GPU memory? What is its structure?
Thank you. |
st177863 | Hi, my code is in https://github.com/sangyx/dgl/tree/master/examples/pytorch/GATNE-T/src.
Running main_sparse.py 1 gets an accuracy of 0.94, but the accuracy will not go higher than 85% with main_sparse_multi_gpu.py 1, even if I set gpu=0.
Is there any error in my code?
My environment is Pytorch 1.6 and dgl-cu10.2 0.52. You can get the test data example in https://github.com/sangyx/dgl/tree/master/examples/pytorch/GATNE-T 2 |
st177864 | Hey @sangyx, when using DDP, you might need to tune the batch size and learning rate a bit. See the discussion below:
Should we split batch_size according to ngpu_per_node when DistributedDataparallel distributed
Assume we have two nodes: node-A and node-B, each has 4gpus(i.e. ngpu_per_node=4). We set args.batch_size = 256 on each node, means that we want each node process 256 images in each forward.
(1) If we use DistributedDataparallel with 1gpu-per-process mode, shall we manually divide the batchsize by ngpu_per_node in torch.utils.data.DataLoader : torch.utils.data.DataLoader(batch_size = args.batch_size / 4)(the way used in pytorch-imagenet-official-example). In my original opinion, I think Distrib… |
st177865 | When I use DataParallel(), the maximum batch size can be set to 512 (cudnn.benchmark is disabled), but DistributedDataParallel only supports setting the batch size to 128.
Could cudnn cause such a problem?
This is the main structure of my code.
if __name__ == '__main__':
...
# Multi GPU
print(f'Running DDP on rank: {args.local_rank}')
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')
main()
def main():
...
train_sampler = DistributedSampler(train_dataset)
same_seeds(args.seed)
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=args.batch_size,
shuffle=(train_sampler is None),
num_workers=args.num_workers,
# worker_init_fn=_init_fn,
pin_memory=True,
sampler=train_sampler
)
...
# Creates a GradScaler once at the beginning of training.
scaler = GradScaler()
# Distribute model across all visible GPUs
net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net)
net = DDP(net, device_ids=[args.local_rank], output_device=args.local_rank)
cudnn.benchmark = True # enable cudnn
...
for epoch in range(start_epoch, args.epochs):
train_sampler.set_epoch(epoch)
train_loss, train_acc, batch_time = train(epoch, net, train_loader, criterion, optimizer,
warmup_scheduler, scaler)
...
def train(epoch, net, train_loader, criterion, optimizer, scheduler, scaler):
"""Train for one epoch."""
net.train()
for batch_idx, (images, targets) in enumerate(train_loader):
with autocast():
images, targets = images.cuda(), targets.cuda()
logits = net(images)
loss = criterion(logits, targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
... |
st177866 | Solved by ptrblck in post #2
AMP shouldn’t use more memory and I assume you are trying to use the global batch size in each process and thus GPU.
As explained here you should set the batch_size for each GPU as the local batch size (by dividing the global batch size by the number of GPUs). |
st177867 | AMP shouldn’t use more memory and I assume you are trying to use the global batch size in each process and thus GPU.
As explained here 6 you should set the batch_size for each GPU as the local batch size (by dividing the global batch size by the number of GPUs). |
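In code terms (reusing the names from the snippet above; the global size of 512 is just the number from this thread):
import torch
import torch.distributed as dist

global_batch_size = 512
per_gpu_batch_size = global_batch_size // dist.get_world_size()

train_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=per_gpu_batch_size,     # local batch size for this rank
    sampler=train_sampler,             # DistributedSampler already shards the data
    num_workers=args.num_workers,
    pin_memory=True,
)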