st177468 | Hi everyone,
I was trying to implement distributed training for my model and I found an issue when a new checkpoint (best model) has to be saved to disk. Whenever I execute my training with 2 processes (one process per GPU) using DistributedDataParallel, the process with rank 1 stops without any error in the output while the master process continues to work for a while (I think it will then stop as well, since process 1 is no longer able to sync).
This problem arises whenever I use the following piece of code to save my checkpoint:
if rank == 0 and loss_trend['dev'][-1] < best_loss:
    best_loss = loss_trend['dev'][-1]
    if cfg.LOGGING.CHECKPOINTS:
        assert ckp_file, 'Checkpoint file not defined'
        state_dict = model.module.cpu().state_dict()
        torch.save(state_dict, ckp_file)
        model.cuda(gpu)
        logger.log('best model saved at: {}'.format(ckp_file))
The problem is fixed if I use deepcopy on my model, without touching (without moving to CPU) the original one:
if rank == 0 and loss_trend['dev'][-1] < best_loss:
    best_loss = loss_trend['dev'][-1]
    if cfg.LOGGING.CHECKPOINTS:
        assert ckp_file, 'Checkpoint file not defined'
        state_dict = copy.deepcopy(model.module).cpu().state_dict()
        torch.save(state_dict, ckp_file)
        logger.log('best model saved at: {}'.format(ckp_file))
Since the deepcopy introduces some overhead, I want to know why it works in this case and whether there are other methods available to solve my problem. Here my model is wrapped using DistributedDataParallel.
Thank you |
st177469 | What is the error that you encounter when you don’t deepcopy the model? Does rank 0 just get stuck and rank 1 exits successfully? If so, do you know where rank 0 is stuck? |
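For what it's worth, one commonly suggested alternative is to copy the state dict tensors to CPU instead of moving or deep-copying the live module. This is only a hedged sketch, not from the thread; cfg, ckp_file, logger, model and rank are the names used in the question above:

if rank == 0 and loss_trend['dev'][-1] < best_loss:
    best_loss = loss_trend['dev'][-1]
    if cfg.LOGGING.CHECKPOINTS:
        assert ckp_file, 'Checkpoint file not defined'
        # copy each tensor to CPU; the live GPU model is left untouched
        state_dict = {k: v.cpu() for k, v in model.module.state_dict().items()}
        torch.save(state_dict, ckp_file)
        logger.log('best model saved at: {}'.format(ckp_file))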
st177470 | I have some questions about using the torch.multiprocessing module. Let’s say I have a torch.nn.Module called model and I call model.share_memory() on it.
What happens if two threads call the forward(), i.e. model(input) at the same time? Is it safe? Or should I use Lock mechanisms to be sure that model is not accessed at the same time by multiple threads?
Similarly, what happens if two or more threads have an optimizer working on model.parameters() and they call optimizer.step() at the same time?
I ask these questions because I often see optimizer.step() being called on shared models without lock mechanisms (e.g. in RL implementations of A3C or ACER) and I wonder if it is a safe thing to do. |
st177471 | Solved by pritamdamania87 in post #2 |
st177472 | fedetask:
What happens if two threads call the forward(), i.e. model(input) at the same time? Is it safe? Or should I use Lock mechanisms to be sure that model is not accessed at the same time by multiple threads?
This really depends on the implementation of your forward function. Typically a forward function doesn’t modify any state so it is safe to call forward from two different threads. However if your forward function is modifying some state for some reason, there might be a race here if you have two threads calling forward.
fedetask:
Similarly, what happens if two or more threads have an optimizer working on model.parameters() and they call optimizer.step() at the same time?
In this case, the optimizers would step on each other and as you mention without any lock mechanisms there might be some inconsistency. However, many frameworks still do this without any lock mechanisms because they are leveraging HOGWILD!, which is basically a paper that showed your training can converge even if you don’t have strict locking around your parameter updates as long as your parameter updates are sparse. You can refer to the paper for more details on why and how this works.
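To make that concrete, here is a minimal Hogwild-style sketch (hypothetical toy model, data, and hyperparameters; not taken from the thread): each process builds its own optimizer over the shared parameters and steps without locking.

import torch
import torch.nn as nn
import torch.multiprocessing as mp

def train(shared_model):
    # each worker builds its own optimizer over the *shared* parameters
    optimizer = torch.optim.SGD(shared_model.parameters(), lr=0.01)
    for _ in range(100):
        x = torch.randn(32, 10)
        y = torch.randn(32, 1)
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(shared_model(x), y)
        loss.backward()
        optimizer.step()  # lock-free, HOGWILD!-style update

if __name__ == '__main__':
    model = nn.Linear(10, 1)
    model.share_memory()  # parameters live in shared memory
    processes = [mp.Process(target=train, args=(model,)) for _ in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()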
The PyTorch Hogwild example does something similar: https://pytorch.org/docs/stable/notes/multiprocessing.html#hogwild 1. |
st177473 | I made some changes to the model’s forward pass in the VL-BERT repository 1.
I was able to successfully run my script for training over multiple (7) GPUs. However, after some time my code suddenly freezes when I increase the number of GPUs to more than 2. I did not make any changes to the way the model is passed for distributed training.
This is how I increase the number of GPUs used:
CUDA_VISIBLE_DEVICES=1,2,3,4 ./scripts/dist_run_single.sh 4 pretrain/train_end2end.py ./cfgs/contrastive_pretrain/base_prec_random_movienet_images_4x16G_fp32.yaml ./checkpoints_debugcv04
Gets stuck if GPUs are more than 2
instead of
CUDA_VISIBLE_DEVICES=1,2 ./scripts/dist_run_single.sh 2 pretrain/train_end2end.py ./cfgs/contrastive_pretrain/base_prec_random_movienet_images_4x16G_fp32.yaml ./checkpoints_debugcv04
Starts training successfully
The model is loaded on each of the local ranks successfully.
The script also enters the train function on each rank : https://github.com/jackroos/VL-BERT/blob/4373674cbf2bcd6c09a2c26abfdb6705b870e3be/common/trainer.py#L56 2
However, the forward pass doesn’t proceed.
I am using the latest version of PyTorch 1.7.0
What might be going wrong here? I assume some synchronization problems might be occurring with more than 2 GPUs
Thanks |
st177474 | However after some time, suddenly, my code freezes when I increase the number of GPUs to more than 2 GPUs.
At what point does the training get stuck? Do you have any logs outputted until the point the training gets stuck (ideally with NCCL_DEBUG=WARN)?
Also how many GPUs does this host have? Do you run into the same issue on other multi-GPU hosts as well?
I assume some synchronization problems might be occurring with more than 2 GPUs
This usually only becomes a major issue at much larger numbers of GPUs, so it should be able to handle more than 2. Are there any factors that could cause synchronization issues, such as some GPUs being significantly slower than others, or other jobs/processes using those GPUs while you were training? |
st177475 | Hi @osalpekar, thanks for your response
I have figured out that the training runs fine when I remove the metric logger for MLMAccuracy 2
The place where it gets stuck is when the get() function 1 is called in MLMAccuracy when writing to tensorboard and the logger. Somehow it is not able to do the all_reduce here for sum_metric 2 for this particular metric when more GPUs are used.
In case it makes sense to see the log files as you suggested:
I have posted the output with NCCL_DEBUG=WARN in 2 files here for 2 cases (freezes and works_fine): https://gist.github.com/amogh112/84a27280e69b983ea88497892e3855cb 4
And the comment at the end shows the output for NCCL_DEBUG=INFO for different cases with 1,2,6 GPUs |
st177476 | amogh112:
The place where it gets stuck is when the get() function is called in MLMAccuracy when writing to tensorboard and logger. Somehow it is not able to do all_reduce here for sum_metric for this particular metric when more number of GPUs are there.
Can you verify that this allreduce gets called on all ranks? One possibility could be that some ranks don’t invoke this allreduce which would result in a freeze. |
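One quick way to check that is a debugging sketch like the following (hedged; sum_metric stands in for the metric tensor in the linked code, and the prints are only for diagnosis):

import torch.distributed as dist

rank = dist.get_rank()
print(f"[rank {rank}] entering all_reduce for sum_metric", flush=True)
dist.all_reduce(sum_metric)   # placeholder for the metric's all_reduce call
print(f"[rank {rank}] left all_reduce", flush=True)

If some rank never prints the "entering" line, that rank skipped the collective and the other ranks will block inside it.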
st177477 | Hi, I want to launch 4 processes (two processes per node) on a distributed memory system
Each node in the system has 2 GPUs
So, the layout is the following:
Node 1
rank 0 on GPU:0
rank 1 on GPU:1
Node 2
rank 2 on GPU:0
rank 3 on GPU:1
I am trying to use this 10 from pytorch documentation
I am using singularity containerization and mpiexec in a script in the following way:
First I do:
qsub -n 2 -t 5 -A myproject ./Script.sh
which asks for 2 nodes for 5 minutes,
inside the script we have the following command
mpiexec -n 4 -f $COBALT_NODEFILE singularity exec --nv -B $mscoco_path $centos_path python3.8 -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=$myrank --master_addr="192.168.1.1" --master_port=1234 $cl_path --b 128 -t -v $mscoco_path
How do I get $myrank env variable in order to provide it to --node_rank as stipulated in the documentation 10?
Thanks! |
st177478 | Hey @dariodematties, is it possible to know which qsub node (node1 or node2) it is on when running Script.sh? If it is possible to get that, the node rank for the launch script can be derived from node id accordingly?
cc @Kiuk_Chung, just in case you have seen use cases like this before. |
st177479 | @mrshenli IIUC singularity’s qsub command is invoked from the submitting process. @dariodematties I’m assuming you are using dist.init_process_group(backend="mpi") inside $mscoco_path?
Since you are already launching with mpiexec there is no need to wrap your script with the distributed launcher. See: https://github.com/pytorch/pytorch/blob/49f0e5dfeb64a928c6a2368dd5f86573b07d20fb/torch/distributed/distributed_c10d.py#L446 13
The rank and world size information is provided by mpi runtime. |
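For reference, a minimal sketch of what the setup inside the training script could look like when launched directly with mpiexec (hedged; requires a PyTorch build with MPI support, and the local-GPU mapping assumes ranks are packed per node as with -ppn 2):

import torch
import torch.distributed as dist

dist.init_process_group(backend="mpi")        # rank/world size come from the MPI runtime
rank = dist.get_rank()
world_size = dist.get_world_size()
local_gpu = rank % torch.cuda.device_count()  # assumes ranks are packed per node
torch.cuda.set_device(local_gpu)
print(f"rank {rank}/{world_size} -> cuda:{local_gpu}")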
st177480 | Thanks @mrshenli, I have tried to get the node id from Script.sh, but I have not been able to do that so far |
st177481 | Hi @Kiuk_Chung, thank you for your help. I think what you say makes a lot of sense
I tried what you suggest
this is the mpiexec line in Script.sh
mpiexec -n 4 -f $COBALT_NODEFILE -ppn 2 singularity exec --nv -B $mscoco_path $centos_path python3.8 $cl_path --b 128 -t -v $mscoco_path
I try to launch 4 processes in two nodes (-ppn stands for processes per node)
mscoco_path is a path bind-mounted into the container (-B)
centos_path is the path of the container image
cl_path is the path to the python script I want to run
what follows are options
I also used dist.init_process_group(backend="mpi") as you suggested
Yet, it is not working
As I can see in the output, it launches the container but never launches the python script |
st177482 | @dariodematties In mpiexec -n 4 -f $COBALT_NODEFILE -ppn 2 what does the -n 4 and -ppn 2 do? My understanding was that mpiexec -n 4 -f $COBALT_NODEFILE is going to invoke 4 procs on each host listed by $COBALT_NODEFILE. |
st177483 | Thank you very much for your response @Kiuk_Chung
-n 4 means 4 processes total and -ppn 2 means 2 processes per node
I have been using this extensively in mpi+omp hybrid applications for C and C++ code and this is the behavior
mpiexec distributes the 4 processes in the nodes you asked for from qsub using -ppn |
st177484 | apologies for the late reply, circling back on this, were you able to get it working? |
st177485 | All good @Kiuk_Chung
We all are very busy in fact
I have not been able to solve it
Running on a single node I realized that it was not necessary to use mpiexec in front of torch.distributed.launch
The correct command to use in the script is the following
singularity exec --nv -B $imagenet_path $centos_path python3.8 -m torch.distributed.launch --nproc_per_node=2 $re_path --b 256 -f 2 -v $best_model_path $imagenet_path
Here I specify 2 processes per node since this node have two GPUs (--nproc_per_node=2)
I launch the script using the following line in the command line
qsub -n 1 -t 720 -A my-project-name ./run_Active_SimCLR.sh
Here I am asking for a single node (-n 1) for 12 hours (-t 720)
Remember that I have this running inside a container
As the documentation 4 points out, when I want to run the same code in two nodes I use the following
singularity exec --nv -B $mscoco_path $centos_path python3.8 -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr="192.168.1.1" --master_port=1234 $cl_path --b 128 -t -v $mscoco_path &
singularity exec --nv -B $mscoco_path $centos_path python3.8 -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=1 --master_addr="192.168.1.1" --master_port=1234 $cl_path --b 128 -t -v $mscoco_path
The new options here that the documentation 4 specifies are the number of nodes (--nnodes=2) and the node id (--node_rank=0)
And I launch it by
qsub -n 2 -t 720 -A my-project-name ./run_Active_SimCLR.sh
Where I ask for 2 nodes during 12 hours
Using print statements I realized that the code gets stuck at this line
torch.distributed.init_process_group(backend='gloo', init_method='env://')
When I check the nodes using qstat they are still listed as running but they are like idle |
st177486 | Hey,
I am new to torch and multi-GPU usage. I went through the Tutorial 2 and I am confused by the usage of model.to(device) in the multi-GPU case. Removing some intermediate lines of code, we are left with something like this:
import torch
import torch.nn as nn
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
all_available_devices = [0,1,2,3]
model = some_NN()
model = nn.DataParallel(model, device_ids=all_available_devices)
model.to(device)
Now, this last line of code is confusing me. Maybe my understanding of model.to() is not correct. In the single-GPU case model.to(device) allocates the model, gradients, and feature maps (in case of a CNN) to our device, correct? Now what does this line do in the multi-GPU case, where the model is saved on all_available_devices?
In addition, in the single-GPU case I allocate the e.g. training data on a specific device (i.e. data.to(device)). Can this impede the data flow that is handled by DataParallel() behind the curtains?
Cheers |
st177487 | Solved by PistonY in post #2 |
st177488 | nn.DataParallel will “copy” the model to the multiple GPUs automatically; .to(device) will load the model onto the main device. |
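Putting that together, a minimal sketch of the usual pattern (hedged; a toy nn.Linear stands in for some_NN() from the question, and it assumes 4 GPUs are visible):

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 5)                      # stands in for some_NN()
model = nn.DataParallel(model, device_ids=[0, 1, 2, 3])
model.to(device)                              # parameters live on the main device (cuda:0)

data = torch.randn(64, 10).to(device)         # inputs also go to the main device
output = model(data)                          # DataParallel scatters the batch over the GPUs
print(output.device)                          # outputs are gathered back on cuda:0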
st177489 | Thank you, and just to be clear: device from the above example would be GPU-1 in the graphic of this post 1?
And about the 2nd question: Following the graphic of the same post, is it also the fastest approach to allocate the data on the same device as the model? |
st177490 | I am trying to implement a parallel evaluation of a function on different sections of my data within the forward() in my model. I am not even sure if this is possible.
I saw that there is a torch.multiprocessing.Pool that I can use to map a function to a list of tensors, but when used within my forward method, it complains because I cannot use pool objects within a class (apparently):
NotImplementedError: pool objects cannot be passed between processes or pickled.
Here more or less what I would like to try:
def forward(self, x):
    x = nf.unfold(x)  # unfold e.g. the image to get patches
    x = evaluate_function_in_parallel(x)  # parallelize this evaluation, e.g. x = pool.map(function, x)
    x = torch.cat(x)
    return x
I have only seen examples of distributed training with torch.multiprocessing and torch.distributed but not examples for distributing the work within the forward function. Is it even possible? If so, are there any examples available?
Any comment on this would be really helpful. Thanks. |
st177491 | @Juans Are you trying to use multiprocessing to parallelize processing in the forward pass on a single node, or are you trying to do this in a distributed setting?
Is there any more info you can provide about the function you are trying to parallelize in the forward pass? A search indicated that this error is thrown when you attempt to pickle the pool object itself (say the function you are trying to parallelize results in pickling the pool).
I am looking for some examples of this behavior, but the recommended method would be to parallelize the entire training iteration (the entire forward pass and backward pass on a single batch) using DDP instead of just parallelizing one part of the fwd pass. |
st177492 | Hi @osalpekar, thank you for your response. I am looking to parallelize processing in the forward pass with the workers within a single node.
My bottleneck is not the batch processing (choosing different batch sizes has little effect on the time spent in the forward pass). Rather, most of the time is spent on the part of the forward where I have a for loop, which I want to parallelize.
Is there any more info you can provide about the function you are trying to parallelize in the forward pass?
For example, when x is a tensor with an MNIST batch, and f_i are arbitrary functions with distinct learnable parameters. The f_i’s take as input a tensor of size (N,M) --> (N,) and in the forward, we have:
x=torch.unfold(x,kernel_size=2,stride=2) #would give for MNIST size [N,196,4]
x=torch.cat([f_i(x[..., i]) for i in range(x.shape[-1])], dim=1) # is very slow!
Here, if I process 1 image or many more in a batch makes little difference. Every function f_i processes the whole batch, but only a part of the image. You could, I guess, think of it as the f_i’s being local receptive fields and we can parallelize the processing of their activations. Then, x passes to other modules in the model.
The closest I have seen to my question is this 1 and this 2. However, the former seems to be a parallelization on the data, while in the latter, more similar to what I intend, there is no solution.
A search indicated that this error is thrown when you attempt to pickle the pool object itself (say the function you are trying to parallelize results in pickling the pool).
I have tried this 2, and this seems to avoid pickling the pool object, but the program just stops responding at some point, so I guess this is not possible.
Sorry for the long answer. Any comment would be very appreciated! |
st177493 | @Juans Thanks for clarifying your use case.
It sounds like multiprocessing.Pool.map (or something similar) would allow you to map a function like the f_i in your example on some chunk of the input tensor to a process pool that you can define. The exact chunksize can be configured using args like chunksize or a related operation called imap. Will this serve your purpose?
I tried the following (very simplistic) example using Pool.map in the model’s forward pass and it seems to work:
import torch
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp

class DemoMultiProcModel(nn.Module):
    def __init__(self):
        super(DemoMultiProcModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        with mp.Pool(5) as p:
            self.result = p.map(sum, x)
        return self.net2(self.relu(self.net1(x)))

def train():
    print(f"Running Model with parallelized forward pass.")
    model = DemoMultiProcModel()
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    for batch_idx in range(5):
        # Generate random inputs/labels
        inputs = torch.randn(20, 10)
        labels = torch.randn(20, 5)

        # Train
        optimizer.zero_grad()
        outputs = model(inputs)
        loss_fn(outputs, labels).backward()
        optimizer.step()
        print(f"Batch {batch_idx} done.") |
st177494 | @osalpekar Thanks a lot for your suggestion.
I had considered torch.multiprocessing.Pool.map since, as you correctly pointed out, seemed to be the solution.
Unfortunately, I see two issues with this (please find the adapted code below):
First: somehow pool.map seems to take quite some time (100x more than using a for-loop in this case, roughly 0.3 s, which is around 3x longer than in my original problem). Do you know of any problems with this method? It seems strange if you compare it with the built-in map function. Could it be some sort of overhead? It seems quite high.
Second: your suggestion does indeed work — I made some small changes and it doesn't raise any errors or freeze. However, this solution does not translate to the case where you have registered parameters in the functions f_i. It throws the following error:
multiprocessing.pool.MaybeEncodingError: Error sending result
Reason: 'RuntimeError('Cowardly refusing to serialize non-leaf tensor which
requires_grad, since autograd does not support crossing process boundaries.
If you just want to transfer the data, call detach() on the tensor before
serializing (e.g., putting it on the queue).')'
So it is an issue with autograd. I have read this error already here 2 and since, it is relatively new to have distributed autograd, I’d need to dive into the docs. Any pointers are welcome!
I tried it in version 1.6.0 and 1.7.0.
Adapted code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
import time

class DemoMultiProcModel(nn.Module):
    def __init__(self):
        super(DemoMultiProcModel, self).__init__()
        self.net1 = nn.Linear(196, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)
        self.kernel = nn.Linear(4, 1)  # let's try to make a feature map

    def func_to_apply(self, data):
        return self.kernel(data)
        # return torch.sum(data, 1)[None]

    def forward(self, x):
        start = time.time()
        with mp.Pool(5) as p:
            result = p.map(self.func_to_apply, x)
        # result = []
        # for x_i in x:
        #     result.append(self.func_to_apply(x_i))
        print(f'Time it takes whole op: {time.time()-start} s')
        x = torch.cat(result, dim=0)
        return self.net2(self.relu(self.net1(x)))

def train():
    print(f"Running Model with parallelized forward pass.")
    model = DemoMultiProcModel()
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    for batch_idx in range(5):
        # Generate random inputs/labels
        inputs = torch.randn(256, 196, 4)
        labels = torch.randn(256, 5)

        # Train
        optimizer.zero_grad()
        outputs = model(inputs)
        loss_fn(outputs, labels).backward()
        optimizer.step()
        print(f"Batch {batch_idx} done.")

if __name__ == '__main__':
    train() |
st177495 | I see, I can confirm repro’ing this error case. It seems like autograd cannot handle computing gradients on tensors that have been created by operations that involve cross-process communication, so it simply refuses to serialize them for IPC in the fwd pass itself.
@pritamdamania87 Is our assessment here correct? Is this use case supported with Distributed Autograd/some other alternative? I’m guessing one way of doing this would be to send RPC’s in the fwd pass to other processes to perform the func_to_apply on some chunk of data, and then collect the results (and distributed autograd would do the reverse in the bwd pass), but not sure if this is feasible/the best approach. |
st177496 | @osalpekar I can try the rpc approach and report it here.
I guess this use case is not so common (although I saw some similar ones in the forum already, as I mentioned above), so I hope it works.
Any further comments are of course welcome! |
st177497 | Just spoke about your use case with a few other folks, and the RPC/Distributed Autograd-based mechanism I described above should work.
Alternately, if you can Torchscript your model (see docs here 4), you may be able to use torch.jit.fork 6 to get this multiprocessing-style parallelism. There are a number of other performance benefits of using Torchscript (such as bypassing the Python Global Interpreter Lock) as well. |
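To make the second option concrete, here is a hedged sketch of the torch.jit.fork pattern (hypothetical modules and shapes standing in for the f_i described earlier; the forked tasks only run in parallel when the module is compiled with TorchScript):

import torch
from typing import List

class PatchNet(torch.nn.Module):
    # stands in for one of the f_i modules (hypothetical sizes)
    def __init__(self):
        super(PatchNet, self).__init__()
        self.kernel = torch.nn.Linear(4, 1)

    def forward(self, x):
        return self.kernel(x)

class ForkedBlock(torch.nn.Module):
    def __init__(self, n_patches):
        super(ForkedBlock, self).__init__()
        self.nets = torch.nn.ModuleList([PatchNet() for _ in range(n_patches)])

    def forward(self, x):
        # x: (N, n_patches, 4); launch one asynchronous task per patch
        futures = torch.jit.annotate(List[torch.jit.Future[torch.Tensor]], [])
        i = 0
        for net in self.nets:
            futures.append(torch.jit.fork(net, x[:, i]))
            i += 1
        # wait() collects the results; tasks execute concurrently under TorchScript
        return torch.cat([torch.jit.wait(f) for f in futures], dim=1)

model = torch.jit.script(ForkedBlock(196))
out = model(torch.randn(256, 196, 4))  # -> (256, 196)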
st177498 | This is great news! Thanks a lot for your help.
I’ll have a look into it. Hopefully I can come up with a solution to post here. Maybe it is also useful to others. |
st177499 | Sometimes OOM errors occur, and typically a way I do to handle this is the following:
for data, iter_idx in zip(data_loader, range(start_iter, total_iter)):
    try:
        iteration_output = _do_iteration(...)
        output = iteration_output.output_image
        loss_dict = iteration_output.data_dict
    except RuntimeError as e:
        # Maybe string can change
        if "out of memory" in str(e):
            if fail_counter == 3:
                raise TrainingException(
                    f"OOM, could not recover after 3 tries: {e}."
                )
            fail_counter += 1
            logger.info(
                f"OOM Error: {e}. Skipping batch. Retry {fail_counter}/3."
            )
            optimizer.zero_grad()
            gc.collect()
            torch.cuda.empty_cache()
            continue
        logger.info(f"Cannot recover from exception {e}. Exiting.")
        raise RuntimeError(e)
    fail_counter = 0
Note: While parsing the error string is suboptimal, it does not appear there is an alternative (I opened an issue about that in GitHub: https://github.com/pytorch/pytorch/issues/48365 1).
This above works well in the forward pass, but if the error occurs somewhere in the backward pass, and you use DistributedDataParallel, you get an exception such as:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Likely some parameters have already a computed derivative and then the model ran out of memory somewhere in the backward pass. Since this is very tricky to debug, I would appreciate any pointers where to look. |
st177500 | Thanks for filing the GitHub issue! Just to confirm the scenario you are seeing, sometimes you see an OOM in the fwd pass (which is handled by your try-catch block), whereas an OOM in the bwd pass results in the RuntimeError you posted.
Are you able to complete an iteration of training without seeing an OOM? If not, the runtime error may actually be due to some value returned by the fwd function that’s not used in the loss computation.
Regarding future debugging, first here 1 is another thread about why the torch.cuda.empty_cache() function is not recommended. I can think of the following ways to get around the OOM issue in a more robust way:
Use Model Parallelism. For CPU-based models you can check out the RPC framework (we are working on robust GPU support for the RPC framework). Otherwise you can split the model manually, call the forward functions on each shard, and move activations around using .to(). Here is a recent question about this.
Try reducing the batch size
Use an optimizer that needs to store less local state (SGD vs. Adam) |
st177501 | @osalpekar Thanks for your reply.
The above code indeed works when an error occurs during the forward pass, skipping a batch and happily continuing, but when the error occurs in the backward pass I get the exception above (tested by increasing the batch size until an OOM occurs in the forward pass).
In general, it works well, and I do not yet need to use model parallelization, I do get an OOM error (about once a day). To get a bit of an idea, this is for my MRI reconstruction project 1, where I can use 40GB of memory, the model is typically around ±34GB (batch size 1 per GPU), but I can get an OOM in the backward pass.
Not particularly sure why and how this happens, but seems to be rather deep in the pytorch internals, and checking how they solve it in e.g. detectron2 it seems like a pragmatic way to solve it this way.
Blatantly ignoring the above exception and continuing with the next batch just freezes the training, by the way, so there must be something I need to reset. |
st177502 | @jteuwen I see, thanks for the added context!
It indeed seems like workflow may be OOM-prone given that a 34GB model, corresponding gradients and optimizer states, and an input sample must fit into 40GB. Is my understanding of these memory sizes correct? There has been some work on DDP to reduce the memory overhead, perhaps @mrshenli may be able to shed some more light on that.
As an aside, torchelastic is a great way of recovering from errors and restarting training from a checkpoint. I’m curious whether it will result in the GPU tensors being freed (which could replace the failure recovery script shared above) cc @Kiuk_Chung |
st177503 | There has been some work on DDP to reduce the memory overhead, perhaps @mrshenli may be able to shed some more light on that.
Thanks @osalpekar. The feature is gradient_as_bucket_view arg in DDP ctor. It will modify param.grad field and let it point to DDP communication bucket views. So that can save one copy of the model. This is still a prototype feature and subject to changes.
DistributedDataParallel — PyTorch 1.7.0 documentation (pytorch.org) 8
+1 to @osalpekar’s comment that torchelastic 8 is the recommended solution for DDP OOMs. The RuntimeError you saw is caused by desync, and the desync is caused by OOM in one process, because DDP expects all processes to launch the same number of AllReduce comm ops in every iteration. If one process hits OOM and skips/redoes some comm ops, it breaks this assumption. TorchElastic handles this problem by checkpointing model states and letting the entire DDP gang revert to the previous checkpoint when it detects a failure in any process, which seems to be a very nice fit for this infrequent and unpredictable OOM. |
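For reference, a hedged sketch of enabling that prototype flag (it assumes the process group is already initialized and model/rank are set up as in a usual DDP script):

ddp_model = torch.nn.parallel.DistributedDataParallel(
    model,
    device_ids=[rank],
    gradient_as_bucket_view=True,  # param.grad fields become views into the communication buckets
)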
st177504 | Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.7.0... (pytorch.org) 3
I have read the tutorial and ran demo_basic with 2 visible GPUs. In the function, I printed the device of labels and outputs, and they are on GPU, while the input torch.randn(20, 10) is on CPU. The tutorial is a little confusing in that the backend is gloo, but the main function tries to use GPUs.
My question is that why don’t we need to transfer the inputs to gpus ?
Also, the tutorial might be more practical if dummy datasets and dataloaders are provided.
def demo_basic(rank, world_size):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)

    # create model and move it to GPU with id rank
    model = ToyModel().to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10))
    labels = torch.randn(20, 5).to(rank)
    loss_fn(outputs, labels).backward()
    optimizer.step()

    cleanup() |
st177505 | My question is that why don’t we need to transfer the inputs to gpus ?
Thanks for pointing this out. Yep, it’s a good practice to also move the inputs to the destination GPU. The reason that the tutorial code didn’t fail is because DDP will recursively detect tensors in the inputs and move them to the target device. See the code below:
github.com — pytorch/pytorch/blob/7df84452423f44ebe1db40a2e3463066bf954f95/torch/nn/parallel/distributed.py#L683-L684 1
inputs, kwargs = self.to_kwargs(inputs, kwargs, self.device_ids[0])
output = self.module(*inputs[0], **kwargs[0]) |
st177506 | Thanks for the quick reply. My further question is: what is len(self.device_ids) when I use 2 GPUs?
if len(self.device_ids) == 1:
    inputs, kwargs = self.to_kwargs(inputs, kwargs, self.device_ids[0])
    output = self.module(*inputs[0], **kwargs[0])
else:
    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
    outputs = self.parallel_apply(self._module_copies[:len(inputs)], inputs, kwargs)
    output = self.gather(outputs, self.output_device) |
st177507 | @Frazer You do not need to change the device_ids argument passed to DDP in the distributed tutorial even if you use 2 GPUs. The demo_basic function is run on each spawned process, so each process ends up with a replica of ToyModel wrapped by DDP. |
st177508 | @osalpekar Yeah, I know and I don’t want to change device_ids. I just want to know the flow of the program. I guess you mean len(self.device_ids) == 1 with 2 gpus. So when will the else condition be triggered ? |
st177509 | Ah I see, thanks for the clarification! The else condition will be triggered when you pass a list of multiple ranks as the device_ids arg to the DDP constructor. This basically allows you to specify which CUDA devices model replicas will be placed on (docs here). |
st177510 | +1 to @osalpekar’s comment.
One thing I want to add is that, when DDP was initially introduced, it has two modes:
single-process multi-device (SPMD): each process exclusively works on multiple GPUs, and hence there will be multiple model replicas within the same process. In this case, the device_ids should be a list of GPUs one process should use.
single-process single-device (SPSD): each process exclusively works on one GPU, i.e., each process works on a single model replica. In this case, device_ids should only contain a single device.
As SPSD is almost always the recommended way to use DDP due to perf reasons, we are planning to retire SPMD mode soon. If there are concerns, please comment here: https://github.com/pytorch/pytorch/issues/47012 1 |
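In code, the two modes look roughly like this (a hedged sketch, reusing the ToyModel/DDP names from the tutorial snippet above):

# single-process single-device (SPSD, recommended): each process drives one GPU
ddp_model = DDP(ToyModel().to(rank), device_ids=[rank])

# single-process multi-device (SPMD, being retired): one process drives several GPUs
ddp_model = DDP(ToyModel().to(0), device_ids=[0, 1])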
st177511 | Hi,
I have two servers with 3 GPUs each. I can run my code when I use all GPUs on both servers (6 GPUs). I want to benchmark using 2 GPUs on each server (4 GPUs) and 1 GPU on each server (2 GPUs).
ngpus_per_node = 1 # or can be 2 or 3
args.world_size = ngpus_per_node * args.world_size # 2 (for 2 machine) is sent to for world_size
When I use all GPUs on each machine it works fine, but with fewer GPUs the code gets stuck at the following line without any error:
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[gpu]) |
st177512 | Solved by PistonY in post #2 |
st177513 | You could use os to set which device python could use.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1' |
st177514 | Or you can set environment variables before running the code.
CUDA_VISIBLE_DEVICES=0,1 python train.py
Set CUDA_VISIBLE_DEVICES to the gpu index you want to use. |
st177515 | Hello, I get the error RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable when I try to run my code on one server A with 2 GPUs while the code runs fine on another server B. What could be causing this issue? Could this be related to the CUDA Version? On the server A where the code fails the cuda version is 10.1 while the server B where the code runs has cuda version 11.
The full error (the process freezes after this output):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/cluster/home/klugh/software/anaconda/envs/temp/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/cluster/home/klugh/software/anaconda/envs/temp/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "/cluster/home/klugh/software/anaconda/envs/temp/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 109, in rebuild_cuda_tensor
storage = storage_cls._new_shared_cuda(
RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable |
st177516 | Are you using a shared cluster/machine by any chance? The GPU may not be available if another application/user has taken control of it. You can check current GPU usage using the nvidia-smi command. |
st177517 | Hi @rvarm1, thanks for the answer! I am indeed using a shared cluster, but when I run nvidia-smi the GPUs seem to be free:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.39 Driver Version: 418.39 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... Off | 00000000:89:00.0 Off | 0 |
| N/A 37C P0 44W / 300W | 0MiB / 32480MiB | 0% E. Process |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... Off | 00000000:8A:00.0 Off | 0 |
| N/A 35C P0 43W / 300W | 0MiB / 32480MiB | 0% E. Process |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+ |
st177518 | Based on the output of nvidia-smi, it seems the GPUs are in EXCLUSIVE Process mode, which would allow only a single context.
nvidia-smi -i 0 -c 0
nvidia-smi -i 1 -c 0
# or for both directly
nvidia-smi -c 0
should reset both GPUs to the default mode again. |
st177519 | @ptrblck could you expand on what exclusive process vs shared mode entails? As far as I understand it’s a common practice in compute clusters to have the GPU set up in “exclusive process” mode and that’s not changeable by a user.
How does pytorch work when doing distributed work as opposed to the regular case? |
st177520 | The exclusive mode might be the right choice for your compute cluster and you can stick to it, if it’s working.
However, I would not recommend it as the default mode, if you are unsure about its limitations (single context creation) and are using your local workstation.
blmt:
How does pytorch work when doing distributed work as opposed to the regular case?
The recommended approach is to use DistributedDataParallel, with a single process per GPU.
Each device would thus create an own context. |
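A minimal sketch of that recommended one-process-per-GPU setup (hedged; hypothetical toy model and rendezvous settings):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'   # rendezvous settings are assumptions
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)               # one CUDA context per process
    model = torch.nn.Linear(10, 10).to(rank)
    ddp_model = DDP(model, device_ids=[rank])
    # ... training loop ...
    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)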
st177521 | Hi everybody,
I want to send a tensor from a server side to a client side and receive it then start to do some computations there. I have defined worker and server as following:
Server side:
from syft.workers.websocket_client import WebsocketClientWorker
alice = WebsocketClientWorker( related arguments)
print(type(alice))
x = torch.tensor([1,3,5])
x_ptr = x.send(alice)
print(x_ptr)
print(x)
Client side:
from syft.workers import websocket_server
server = websocket_server.WebsocketServerWorker( id=id, host=host, port=port, hook=hook, verbose=verbose)
server.start()
print('server.list_objects():', server.list_objects())
after running from client side then server side I got the following results from server side:
<class ‘syft.workers.websocket_client.WebsocketClientWorker’>
(Wrapper)>[PointerTensor | me:56794581876 -> Alice:90382785850]
tensor([1, 3, 5])
Then, from the client side I did not get anything until I stopped execution, at which point I got:
server.list_objects(): {90382785850: tensor([1, 3, 5])}
I want to have the tensor on the client side. I know syft.workers.websocket_server.WebsocketServerWorker has a _recv_msg function, but I do not know how I can use it.
Thank you in advance. |
st177522 | Hey @Nazila-H, is data privacy and security a hard requirement?
If no, torch.distributed.rpc is also an option.
API: https://pytorch.org/docs/master/rpc.html
tutorial: https://pytorch.org/tutorials/beginner/dist_overview.html 3
If yes, we will need some help from the PySyft experts.
@ptrblck do you know who is familiar with PySyft? Thx! |
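For completeness, if privacy were not required, a minimal torch.distributed.rpc sketch for sending a tensor to another worker could look like this (hedged; hypothetical worker names and rendezvous settings, and note it carries none of PySyft's privacy guarantees):

import os
import torch
import torch.distributed.rpc as rpc

os.environ['MASTER_ADDR'] = 'localhost'   # rendezvous settings are assumptions
os.environ['MASTER_PORT'] = '29500'

# on the machine acting as "server" (rank 0)
rpc.init_rpc("server", rank=0, world_size=2)
x = torch.tensor([1, 3, 5])
# run torch.add on the remote worker with x as an argument and get the result back
result = rpc.rpc_sync("client", torch.add, args=(x, 1))
rpc.shutdown()

# on the machine acting as "client" (rank 1), only:
# rpc.init_rpc("client", rank=1, world_size=2); rpc.shutdown()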
st177523 | @Tudor_Cebere is currently working on PySyft, if I’m not mistaken, so he might help. |
st177524 | Thank you @ptrblck for pointing out this issue!
Hello @Nazila-H,
As a small disclaimer, you are using syft_0.2.X, which currently is not under active development. If you want to get familiar with the syft ecosystem I would recommend:
Joining the OpenMined slack and asking questions there; you will get really fast responses.
Try the new 0.3.0 version.
If 0.2.X is a hard requirement, please create an issue here:
PySyft 6
And add the 0.2.X tag, the community/people that are still working to fix issues on 0.2.X will help you as soon as possible.
Thank you,
Tudor Cebere |
st177525 | Hey @mrshenli, thank you for your answer.
Yes, I need to preserve the privacy so I am working with PySyft 0.2.4 and it seems my problems are because of the version. |
st177526 | Hello @Tudor_Cebere.
Thank you for your explanation.
Yes, I am working with syft v.02.
I will ask my question in OpenMined slack.
Best,
Nazila |
st177527 | I am training a model using DistributedDataParallel(code snippet below). My model is initialized using nn.ModuleList(). But once the input which is on GPU passes through one of the blocks in nn.ModuleList, it switches to CPU mode.
if use_cuda and torch.cuda.device_count() > 1:
    model = model.to(rank)
    model = DistributedDataParallel(model, device_ids=[rank])
Please refer to forward method of Block class which creates a list of SimpleNet class. Please let me know if you require additional piece of code.
class SimpleNet(nn.Module):
    def __init__(self, inp, parity):
        super(SimpleNet, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(inp//2, 256),
            nn.LeakyReLU(True),
            nn.Linear(256, 256),
            nn.LeakyReLU(True),
            nn.Linear(256, inp//2),
            nn.Sigmoid(),
            nn.BatchNorm1d(392)
        )
        self.inp = inp
        self.parity = parity

    def forward(self, x):
        z = torch.zeros(x.size())
        x0, x1 = x[:, ::2], x[:, 1::2]
        if self.parity % 2:
            x0, x1 = x1, x0
        # print("X: ", x0[0][0].detach(), x1[0][0].detach())
        z1 = x1
        log_s = self.net(x1)
        # print(x.size(), x1.size(), log_s.size())
        t = self.net(x1)
        s = torch.exp(log_s)
        z0 = (s * x0) + t
        # print("Z: ", z0[0][0].detach(), z1[0][0].detach())
        if self.parity % 2:
            z0, z1 = z1, z0
        z[:, ::2] = z0
        z[:, 1::2] = z1
        logdet = torch.sum(torch.log(s), dim=1)
        return z, logdet

    def reverse(self, z):
        x = torch.zeros(z.size())
        z0, z1 = z[:, ::2], z[:, 1::2]
        if self.parity % 2:
            z0, z1 = z1, z0
        # print("Z: ", z0[0][0].detach(), z1[0][0].detach())
        x1 = z1
        log_s = self.net(z1)
        t = self.net(z1)
        s = torch.exp(log_s)
        x0 = (z0 - t) / s
        # print("X: ", x0[0][0].detach(), x1[0][0].detach())
        if self.parity % 2:
            x0, x1 = x1, x0
        x[:, ::2] = x0
        x[:, 1::2] = x1
        return x

class Block(nn.Module):
    def __init__(self, inp, n_blocks):
        super(Block, self).__init__()
        parity = 0
        blocks = []
        for _ in range(n_blocks):
            blocks.append(SimpleNet(inp, parity))
            parity += 1
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x):
        logdet = 0
        out = x
        xs = [out]
        # print("*"*20, "FORWARD", "*"*30)
        for block in self.blocks:
            print("device in block: ", out.is_cuda)  # True
            out, det = block(out)
            print("device in block: ", out.is_cuda)  # False
            logdet += det
            xs.append(out)
        return out, logdet

    def reverse(self, z):
        # print("*"*20, "REVERSE", "*"*30)
        out = z
        for block in self.blocks[::-1]:
            out = block.reverse(out)
        return out |
st177528 | Solved by osalpekar in post #2 |
st177529 | In the SimpleNet forward() function, the zeros tensor z is being created on CPU, which is why the out tensor returned by the forward function of that block returns out.is_cuda = False. You must either place the z tensor explicitly on the correct rank or make it an nn.Parameter or similar type that will be moved to GPU when the entire module is placed on GPU. |
st177530 | Thank you for your response. I was able to make it work using the nn.Parameter and register_parameter but I noticed that if I just initialize the z tensor like z = torch.zeros_like(x) instead of torch.zeros(x.size()), it is automatically loaded to the same device as x. |
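For reference, both forms place the buffer on the input's device; a small sketch (it assumes a CUDA device is available, and the tensor shape is only illustrative):

import torch

x = torch.randn(8, 784, device='cuda')   # stands in for the input reaching SimpleNet.forward
z = torch.zeros_like(x)                   # inherits device and dtype from x
z2 = torch.zeros(x.size(), device=x.device, dtype=x.dtype)  # equivalent explicit form
print(z.is_cuda, z2.is_cuda)              # True True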
st177531 | While trying to start a multi-gpu process;
store = TCPStore(result.hostname, result.port, world_size, start_daemon)
RuntimeError: Permission denied
Has anyone met with this before? |
st177532 | I met this problem minutes ago and I solved it by changing the code from os.environ['MASTER_PORT'] = '80' to os.environ['MASTER_PORT'] = '9901' (ports below 1024 are privileged and typically require root, which is likely why binding port 80 fails). |
st177533 | Is there a way to enable distributed inference, instead of training? Also, is it possible to distribute the work across multiple servers each with multiple GPUs, or does it only work for a single server with multiple GPU? If any of these features are missing, will they be coming out soon?
Lastly, what would be the recommended environment / library to enable distributed inference on multiple servers each with multiple GPUs?
Thanks! |
st177534 | Hi,
For single server, you can use nn.DataParallel 146.
For multiple servers, the distributed 224 package should have everything you need. |
st177535 | I don’t get it. Everything in the distributed docs relates to training.
And DataParallel can be used for inference, sure, but for production it has little use if requests come at random times. |
st177536 | So far I have only used a single-server multi-GPU environment but in principle, DDP can be used at inference time, too.
What hinders using DDP at inference are:
the synchronization at backward
the DistributedSampler that modifies the dataloader so that the number of samples is evenly divisible by the number of GPUs.
At inference, you don’t need backward computation and you don’t want to modify the evaluation data.
You can use a custom dataloader for evaluation similarly this example 94 to avoid the problems.
A related thread is here 78. |
st177537 | For those looking for a production inference service that allows for serving requests on models in parallel, you can check out TorchServe 147. |
st177538 | This paper (SimCLR) introduced a self-supervised learning method. In this method, the InfoNCE loss is computed at the batch level from the feature similarities between different inputs. The paper also points out that a bigger batch size gives better performance.
For example, I have an 8-GPU machine and I can put 128 images on each GPU. If I use the trivial implementation, I will get eight 128×128 similarity matrices. How could I get one 1024×1024 similarity matrix? I was thinking about using all-reduce, but I am not sure if the gradient can be passed to all GPUs. Any methods to implement it? Thanks!
Edit: to make it more clear, suppose we have B images and the features extracted from the model are x1, x2, …, xB; the loss function takes the pairwise dot-product similarities as inputs. Now I can only compute the pairwise similarity (128×128) on each GPU, sum up the loss from the 8 GPUs and do backward. I hope to compute the pairwise similarity (1024×1024) and directly do backward. How can we do this? Thanks! |
st177539 | @KaiHoo What is the operation you want to perform across these 8 128x128 matrices? Allreduce will ensure that each GPU ends up with the average (or sum) of the matrices on all the GPUs. There are numerous such collective operations that you can perform to communicate data across GPUs in the distributed package that may be useful (docs here: https://pytorch.org/docs/stable/distributed.html#synchronous-and-asynchronous-collective-operations 4) |
st177540 | Hello, I made my question more clear. I know the Allreduce op, but I am not sure if the gradient could pass this op? Thanks! |
st177541 | Yes, the gradients can be passed to collective operations. You can access the gradient tensors by checking the .grad field of each of the parameters. |
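For the original question of building the full 1024×1024 matrix, one commonly used approach (not spelled out in this thread, so treat it as a hedged sketch) is to all_gather the per-GPU feature chunks; note that all_gather does not backpropagate through the gathered copies, so the local chunk is substituted back in to keep its gradient path:

import torch
import torch.distributed as dist

def gather_features(local_feats):            # local_feats: (128, D) on this rank
    world_size = dist.get_world_size()
    gathered = [torch.zeros_like(local_feats) for _ in range(world_size)]
    dist.all_gather(gathered, local_feats)   # no grad flows through these copies
    gathered[dist.get_rank()] = local_feats  # keep the autograd path for the local chunk
    return torch.cat(gathered, dim=0)        # (128 * world_size, D)

# features = gather_features(model(images))
# sim = features @ features.t()              # (1024, 1024) when world_size == 8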
st177542 | Hello,
Do you guys have a C++ example similar to the python sample here:
https://pytorch.org/tutorials/intermediate/dist_tuto.html 53
From looking at the source code, I only see python support in source code; for example here …\torch\csrc\multiprocessing\init.cpp
Thank you. |
st177543 | Do you mean an example of distributed training using the C++ frontend? We don’t have one combining the two unfortunately. Also, there is not yet a torch.nn.parallel.DistributedDataParallel equivalent for the C++ frontend. That said, it is possible to use the distributed primitives from C++. See torch/lib/c10d for the source code. |
st177544 | Yep, that’s what I meant. I will take a look at torch/lib/c10d and try to build one myself.
Thanks for the reply. |
st177545 | @kais I am looking for an example for distributed training with C++ frontend. If you managed to build one, can you please share? |
st177546 | I managed to implement a few examples using Libtorch and MPI to help others in the community. Check https://github.com/soumyadipghosh/eventgrad 71 |
st177547 | @soumyadipghosh Thanks for contributing this to the community and for the C++/MPI example PR!
Just as a general note for this thread, using the c10d APIs will enable distributed data parallel training that will produce the same results as DDP. However, calling allreduce after the backward pass to synchronize gradients will likely lag in performance as compared to DDP, which overlaps computation and communication by synchronizing smaller buckets of gradients during the backward pass. |
st177548 | Hi everyone,
I am new to PyTorch. I want to get just one batch from the DataLoader and have the indices of the samples as well, and every time I want to use these same index ids (without the ids changing). Any idea?
Thank you in advance. |
st177549 | @Nazila-H Is your question in the context of using DataLoader specifically in a distributed training setting, or is it about more general DataLoader behavior? If it’s the latter, there may be folks with more insight into this area if you post in the Uncategorized topic. |
st177550 | Yes, it is in distributed framework.
I want to have the same sample indices on three different virtual machines. Can I modify shuffle in torch.utils.data.DataLoader, or define a fixed number as a kind of random seed on each machine? |
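One hedged sketch of getting identical shuffled indices on every machine is to drive the shuffle from a generator with a fixed seed (hypothetical dataset and batch size; assumes a PyTorch version where DataLoader accepts a generator argument):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(1000).float())   # hypothetical data
g = torch.Generator()
g.manual_seed(42)                                      # same seed on every machine -> same order
loader = DataLoader(dataset, batch_size=32, shuffle=True, generator=g)

first_batch = next(iter(loader))   # identical shuffle order on all machines
# to also get the indices, let the Dataset's __getitem__ return (index, sample)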
st177551 | Error message:
Expected tensor for ‘out’ to have the same device as tensor for argument #2 ‘mat1’; but device 1 does not equal 0 (while checking arguments for addmm)
I understand this error has been discussed quite a lot and after reading several posts I had a basic idea of why this occurs on my code. Mainly because I am using a very complex model.
My code works fine on single-GPU mode, after adding torch.nn.DataParallel, I tried to run on a 4-GPU node, the error occurred. Can someone kindly have a look at my model and point out where to modify please?
CUDA Setting:
os.environ["CUDA_VISIBLE_DEVICES"]= '0,1,2,3'
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Input variable CUDA setting:
PE_batch = get_pe(seq_lens, seq_len).float().to(device)
seq_embedding_batch = torch.Tensor(seq_embeddings.float()).to(device)
state_pad = torch.zeros([matrix_reps_batch.shape[0],seq_len, seq_len]).to(device)
Model instantiation and application:
contact_net = ContactAttention_simple_fix_PE(d=d, L=seq_len, device=device).to(device)
contact_net = torch.nn.DataParallel(contact_net)
output = contact_net(PE_batch,seq_embedding_batch, state_pad)
Model details (the problem should be here; I am using a subclass of a class derived from nn.Module, so there are two model classes. During my debugging, I added .to(self.device) to every operation in forward() in case I missed any of the layers):
class ContactAttention_simple(nn.Module):
    def __init__(self, d, L):
        super(ContactAttention_simple, self).__init__()
        self.d = d
        self.L = L
        self.conv1d1 = nn.Conv1d(in_channels=4, out_channels=d,
                                 kernel_size=9, padding=8, dilation=2)
        self.bn1 = nn.BatchNorm1d(d)
        self.conv_test_1 = nn.Conv2d(in_channels=6*d, out_channels=d, kernel_size=1)
        self.bn_conv_1 = nn.BatchNorm2d(d)
        self.conv_test_2 = nn.Conv2d(in_channels=d, out_channels=d, kernel_size=1)
        self.bn_conv_2 = nn.BatchNorm2d(d)
        self.conv_test_3 = nn.Conv2d(in_channels=d, out_channels=1, kernel_size=1)
        self.position_embedding_1d = nn.Parameter(
            torch.randn(1, d, 600)
        )
        self.encoder_layer = nn.TransformerEncoderLayer(2*d, 2)
        self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer, 3)

    def forward(self, prior, seq, state):
        position_embeds = self.position_embedding_1d.repeat(seq.shape[0], 1, 1)
        seq = seq.permute(0, 2, 1)  # 4*L
        seq = F.relu(self.bn1(self.conv1d1(seq)))  # d*L just for increase the capacity
        seq = torch.cat([seq, position_embeds], 1)  # 2d*L
        seq = self.transformer_encoder(seq.permute(-1, 0, 1))
        seq = seq.permute(1, 2, 0)
        seq_mat = self.matrix_rep(seq)  # 4d*L*L
        p_mat = self.matrix_rep(position_embeds)  # 2d*L*L
        infor = torch.cat([seq_mat, p_mat], 1)  # 6d*L*L
        contact = F.relu(self.bn_conv_1(self.conv_test_1(infor)))
        contact = F.relu(self.bn_conv_2(self.conv_test_2(contact)))
        contact = self.conv_test_3(contact)
        contact = contact.view(-1, self.L, self.L)
        contact = (contact + torch.transpose(contact, -1, -2)) / 2
        return contact.view(-1, self.L, self.L)

    def matrix_rep(self, x):
        x = x.permute(0, 2, 1)  # L*d
        L = x.shape[1]
        x2 = x
        x = x.unsqueeze(1)
        x2 = x2.unsqueeze(2)
        x = x.repeat(1, L, 1, 1)
        x2 = x2.repeat(1, 1, L, 1)
        mat = torch.cat([x, x2], -1)  # L*L*2d
        mat_tril = torch.tril(mat.permute(0, -1, 1, 2))  # 2d*L*L
        mat_diag = mat_tril - torch.tril(mat.permute(0, -1, 1, 2), diagonal=-1)
        mat = mat_tril + torch.transpose(mat_tril, -2, -1) - mat_diag
        return mat

class ContactAttention_simple_fix_PE(ContactAttention_simple):
    def __init__(self, d, L, device):
        super(ContactAttention_simple_fix_PE, self).__init__(d, L)
        self.device = device
        self.PE_net = nn.Sequential(
            nn.Linear(111, 5*d),
            nn.ReLU(),
            nn.Linear(5*d, 5*d),
            nn.ReLU(),
            nn.Linear(5*d, d))

    def forward(self, pe, seq, state):
        position_embeds = self.PE_net(pe.view(-1, 111).to(self.device)).view(-1, self.L, self.d).to(self.device)  # N*L*111 -> N*L*d
        position_embeds = position_embeds.permute(0, 2, 1).to(self.device)  # N*d*L
        seq = seq.permute(0, 2, 1).to(self.device)  # 4*L
        seq = F.relu(self.bn1(self.conv1d1(seq))).to(self.device)  # d*L just for increase the capacity
        seq = torch.cat([seq, position_embeds], 1).to(self.device)  # 2d*L
        seq = self.transformer_encoder(seq.permute(-1, 0, 1).to(self.device)).to(self.device)
        seq = seq.permute(1, 2, 0).to(self.device)
        seq_mat = self.matrix_rep(seq).to(self.device)  # 4d*L*L
        p_mat = self.matrix_rep(position_embeds).to(self.device)  # 2d*L*L
        infor = torch.cat([seq_mat, p_mat], 1).to(self.device)  # 6d*L*L
        contact = F.relu(self.bn_conv_1(self.conv_test_1(infor))).to(self.device)
        contact = F.relu(self.bn_conv_2(self.conv_test_2(contact))).to(self.device)
        contact = self.conv_test_3(contact).to(self.device)
        contact = contact.view(-1, self.L, self.L).to(self.device)
        contact = ((contact.to(self.device) + torch.transpose(contact, -1, -2).to(self.device)) / 2).to(self.device)
        return contact.view(-1, self.L, self.L).to(self.device) |
st177552 | Solved by ptrblck in post #16
It seems the error is raised by rewrapping the model into nn.DataParallel in each iteration.
Move contact_net = torch.nn.DataParallel(contact_net) before the epoch loop and it should work.
I don’t know why this usage gives a device mismatch error and think a better error message should be raised.…
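In other words, a hedged sketch of the fix (hypothetical loop and loader names, following the solution above):

contact_net = ContactAttention_simple_fix_PE(d=d, L=seq_len).to(device)
contact_net = torch.nn.DataParallel(contact_net)   # wrap once, before the epoch loop

for epoch in range(num_epochs):                    # hypothetical loop
    for PE_batch, seq_embedding_batch, state_pad in train_loader:
        output = contact_net(PE_batch.to(device),
                             seq_embedding_batch.to(device),
                             state_pad.to(device))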
st177553 | Hey @irleader, I think the problem is in the lines (there are multiple lines) that try to use self.device in the following way. DataParallel replicates your model to all provided/visible devices. So, in the above case, there will be four threads, with each thread holding one replica of self.PE_net on a different device. However, DataParallel is not smart enough to modify self.device for you, so self.device on all threads will point to the same device, which is the one you passed to the ContactAttention_simple_fix_PE() ctor. Hence, there will be a device mismatch.
self.PE_net(pe.view(-1, 111).to(self.device))
If pe is a tensor, DataParallel should have already scattered it to the correct device. Is there any reason for calling .to(self.device) again?
cc @VitalyFedyunin |
st177554 | Hi, Shen Li,
Thanks a lot for pointing this out. Those .to(self.device) were not there, I added them because I am trying to get rid of the error message. This is due to several posts saying that any layer of the model not defined in the init() but used in forward() should add .to(device).
Even if I remove all .to(self.device) from forward(), the error is still there.
I was thinking the error might be caused by def matrix_rep(self, x) which is defined outside of init() but used in forward(), but I have no idea how to modify it. |
st177555 | @ptrblck_de Hi ptrblck, can you have a look at my code please? I see that you have answered lots of similar problems. Thanks. |
st177556 | irleader:
Expected tensor for ‘out’ to have the same device as tensor for argument #2 ‘mat1’; but device 1 does not equal 0 (while checking arguments for addmm)
Which line reported the above error? |
st177557 | Hi,
Th error message was reported on this line:
output = contact_net(PE_batch,seq_embedding_batch, state_pad)
Thanks. |
st177558 | As described by @mrshenli the to(device) calls inside the forward would cause this error and your model works without them:
import torch
import torch.nn as nn
import torch.nn.functional as F
class ContactAttention_simple(nn.Module):
    def __init__(self, d, L):
        super(ContactAttention_simple, self).__init__()
        self.d = d
        self.L = L
        self.conv1d1 = nn.Conv1d(in_channels=4, out_channels=d,
                                 kernel_size=9, padding=8, dilation=2)
        self.bn1 = nn.BatchNorm1d(d)
        self.conv_test_1 = nn.Conv2d(in_channels=6*d, out_channels=d, kernel_size=1)
        self.bn_conv_1 = nn.BatchNorm2d(d)
        self.conv_test_2 = nn.Conv2d(in_channels=d, out_channels=d, kernel_size=1)
        self.bn_conv_2 = nn.BatchNorm2d(d)
        self.conv_test_3 = nn.Conv2d(in_channels=d, out_channels=1, kernel_size=1)
        self.position_embedding_1d = nn.Parameter(
            torch.randn(1, d, 600)
        )
        self.encoder_layer = nn.TransformerEncoderLayer(2*d, 2)
        self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer, 3)

    def forward(self, prior, seq, state):
        position_embeds = self.position_embedding_1d.repeat(seq.shape[0], 1, 1)
        seq = seq.permute(0, 2, 1)  # 4*L
        seq = F.relu(self.bn1(self.conv1d1(seq)))  # d*L just for increase the capacity
        seq = torch.cat([seq, position_embeds], 1)  # 2d*L
        seq = self.transformer_encoder(seq.permute(-1, 0, 1))
        seq = seq.permute(1, 2, 0)
        seq_mat = self.matrix_rep(seq)  # 4d*L*L
        p_mat = self.matrix_rep(position_embeds)  # 2d*L*L
        infor = torch.cat([seq_mat, p_mat], 1)  # 6d*L*L
        contact = F.relu(self.bn_conv_1(self.conv_test_1(infor)))
        contact = F.relu(self.bn_conv_2(self.conv_test_2(contact)))
        contact = self.conv_test_3(contact)
        contact = contact.view(-1, self.L, self.L)
        contact = (contact + torch.transpose(contact, -1, -2)) / 2
        return contact.view(-1, self.L, self.L)

    def matrix_rep(self, x):
        x = x.permute(0, 2, 1)  # L*d
        L = x.shape[1]
        x2 = x
        x = x.unsqueeze(1)
        x2 = x2.unsqueeze(2)
        x = x.repeat(1, L, 1, 1)
        x2 = x2.repeat(1, 1, L, 1)
        mat = torch.cat([x, x2], -1)  # L*L*2d
        mat_tril = torch.tril(mat.permute(0, -1, 1, 2))  # 2d*L*L
        mat_diag = mat_tril - torch.tril(mat.permute(0, -1, 1, 2), diagonal=-1)
        mat = mat_tril + torch.transpose(mat_tril, -2, -1) - mat_diag
        return mat

class ContactAttention_simple_fix_PE(ContactAttention_simple):
    def __init__(self, d, L):
        super(ContactAttention_simple_fix_PE, self).__init__(d, L)
        self.PE_net = nn.Sequential(
            nn.Linear(111, 5*d),
            nn.ReLU(),
            nn.Linear(5*d, 5*d),
            nn.ReLU(),
            nn.Linear(5*d, d))

    def forward(self, pe, seq):
        print(pe.shape, pe.device)
        position_embeds = self.PE_net(pe.view(-1, 111)).view(-1, self.L, self.d)  # N*L*111 -> N*L*d
        position_embeds = position_embeds.permute(0, 2, 1)  # N*d*L
        seq = seq.permute(0, 2, 1)  # 4*L
        seq = F.relu(self.bn1(self.conv1d1(seq)))  # d*L just for increase the capacity
        seq = torch.cat([seq, position_embeds], 1)  # 2d*L
        seq = self.transformer_encoder(seq.permute(-1, 0, 1))
        seq = seq.permute(1, 2, 0)
        seq_mat = self.matrix_rep(seq)  # 4d*L*L
        p_mat = self.matrix_rep(position_embeds)  # 2d*L*L
        infor = torch.cat([seq_mat, p_mat], 1)  # 6d*L*L
        contact = F.relu(self.bn_conv_1(self.conv_test_1(infor)))
        contact = F.relu(self.bn_conv_2(self.conv_test_2(contact)))
        contact = self.conv_test_3(contact)
        contact = contact.view(-1, self.L, self.L)
        contact = ((contact + torch.transpose(contact, -1, -2)) / 2)
        return contact.view(-1, self.L, self.L)
model = ContactAttention_simple_fix_PE(1, 111).cuda()
model = nn.DataParallel(model)
x = torch.randn(8, 111, 111).cuda()
seq = torch.randn(8, 111, 4).cuda()
out = model(x, seq)
print(out.shape)
Output without nn.DataParallel:
torch.Size([8, 111, 111]) cuda:0
torch.Size([8, 111, 111])
Output with nn.DataParallel:
torch.Size([1, 111, 111]) cuda:0
torch.Size([1, 111, 111]) cuda:1
torch.Size([1, 111, 111]) cuda:2
torch.Size([1, 111, 111]) cuda:3
torch.Size([1, 111, 111]) cuda:4
torch.Size([1, 111, 111]) cuda:5
torch.Size([1, 111, 111]) cuda:6
torch.Size([1, 111, 111]) cuda:7
torch.Size([8, 111, 111])
Note that I have used random tensors with shapes, which seem to work for this model, as I didn’t see any information regarding the shapes. |
st177559 | @ptrblck Thanks a lot for this. I tried with your code and it works well only on the first step, and then the error occurs again. Here is the full error message:
torch.Size([5, 600, 111]) cuda:0
torch.Size([5, 600, 111]) cuda:1
torch.Size([5, 600, 111]) cuda:2
torch.Size([5, 600, 111]) cuda:3
Stage 1, epoch: 0,step: 0, loss: 0.5405522584915161
torch.Size([2, 600, 111]) cuda:0
torch.Size([2, 600, 111]) cuda:1
torch.Size([1, 600, 111]) cuda:2
Traceback (most recent call last):
File “e2e_learning_stage1.py”, line 349, in
output = contact_net(PE_batch,seq_embedding_batch, state_pad)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py”, line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py”, line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py”, line 86, in parallel_apply
output.reraise()
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/_utils.py”, line 428, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py”, line 61, in _worker
output = module(input, **kwargs)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(input, **kwargs)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py”, line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py”, line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py”, line 86, in parallel_apply
output.reraise()
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/_utils.py”, line 428, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py”, line 61, in _worker
output = module(input, **kwargs)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(input, **kwargs)
File “/g/data/ik06/jiajia/e2efold_master/e2efold/models.py”, line 252, in forward
position_embeds = self.PE_net(pe.view(-1, 111)).view(-1, self.L, self.d) # NL111 -> NLd
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/modules/container.py”, line 117, in forward
input = module(input)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/modules/linear.py”, line 93, in forward
return F.linear(input, self.weight, self.bias)
File “/g/data/ik06/jiajia/python3packages/lib/python3.8/site-packages/torch/nn/functional.py”, line 1690, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: Expected tensor for ‘out’ to have the same device as tensor for argument #2 ‘mat1’; but device 0 does not equal 1 (while checking arguments for addmm)
I am using batch size 20, and 4 GPUs. I will also attach my training code here if helpful:
for epoch in range(epoches_first):
for contacts, seq_embeddings, matrix_reps, seq_lens in train_generator:
contact_net.train()
contacts_batch = torch.Tensor(contacts.float()).cuda()
seq_embedding_batch = torch.Tensor(seq_embeddings.float()).cuda()
matrix_reps_batch = torch.unsqueeze(torch.Tensor(matrix_reps.float()).cuda(), -1)
state_pad = torch.zeros([matrix_reps_batch.shape[0],seq_len, seq_len]).cuda()
PE_batch = get_pe(seq_lens, seq_len).float().cuda()
contact_masks = torch.Tensor(contact_map_masks(seq_lens, seq_len)).cuda()
contact_net = torch.nn.DataParallel(contact_net)
output = contact_net(PE_batch,seq_embedding_batch, state_pad)
# Compute loss
loss_u = criterion_bce_weighted(output*contact_masks, contacts_batch)
# print(steps_done)
if steps_done % OUT_STEP ==0:
print('Stage 1, epoch: {},step: {}, loss: {}'.format(
epoch, steps_done, loss_u))
# Optimize the model
u_optimizer.zero_grad()
loss_u.backward()
u_optimizer.step()
steps_done=steps_done+1
Just to add the seed_torch() method I used, to make sure it's not causing trouble:
def seed_torch(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # if you are using multi-GPU.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True |
st177560 | Interestingly, it seems you are getting the error if the batches are imbalanced, i.e. if the data cannot be split equally across the devices.
I just tested my code with a batch size of 5 (which is your last case before the error is raised) and it still works fine. Could you do the same and check if you are seeing an error?
Also, are you using the latest PyTorch version? |
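If it does turn out to be related to a smaller last batch, one quick way to rule that out is to drop the incomplete final batch, so that every step sees a batch size that splits evenly across the 4 GPUs (a sketch with placeholder dataset/loader names, meant only as a diagnostic, not as the root-cause fix):
from torch.utils.data import DataLoader

# `train_set` is a placeholder for your Dataset instance; drop_last discards
# the incomplete final batch so every step can be split evenly across 4 GPUs.
train_generator = DataLoader(train_set, batch_size=20, shuffle=True,
                             num_workers=0, drop_last=True)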
st177561 | @ptrblck Thanks again for your time. I have tested with a batch size of 5 on 4 GPUs. The same error occurs:
torch.Size([2, 600, 111]) cuda:0
torch.Size([2, 600, 111]) cuda:1
torch.Size([1, 600, 111]) cuda:2
Stage 1, epoch: 0,step: 0, loss: 0.5292213559150696
torch.Size([1, 600, 111]) cuda:0
torch.Size([1, 600, 111]) cuda:1
Traceback (most recent call last):
File “e2e_learning_stage1.py”, line 343, in
output = contact_net(PE_batch,seq_embedding_batch, state_pad)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py”, line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py”, line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py”, line 86, in parallel_apply
output.reraise()
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/_utils.py”, line 428, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py”, line 61, in _worker
output = module(input, **kwargs)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(input, **kwargs)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py”, line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py”, line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py”, line 86, in parallel_apply
output.reraise()
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/_utils.py”, line 428, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py”, line 61, in _worker
output = module(input, **kwargs)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(input, **kwargs)
File “/g/data/ik06/jiajia/e2efold_master/e2efold/models.py”, line 252, in forward
position_embeds = self.PE_net(pe.view(-1, 111)).view(-1, self.L, self.d) # NL111 -> NLd
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py”, line 117, in forward
input = module(input)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py”, line 93, in forward
return F.linear(input, self.weight, self.bias)
File “/home/248/jx3129/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py”, line 1690, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: Expected tensor for ‘out’ to have the same device as tensor for argument #2 ‘mat1’; but device 0 does not equal 1 (while checking arguments for addmm)
I am using pytorch 1.7.0. |
st177562 | The second iteration is now using only a batch size of 2 (or 3 in case the script is crashing before cuda:2 is executed), so in your script something is reducing the batch size. Could you post an executable code snippet (using my template) to reproduce this issue? |
st177563 | @ptrblck I am so sorry, but the code I am running is a big project that contains several scripts, so it's hard to merge them into one executable code snippet. Can you give me your email, or should I upload it to Google Drive and give you the link? Is that OK? |
st177564 | @ptrblck Thanks in advance for your time. I managed to merge all scripts into one and reproduced the error with it. The raw data I am using has been uploaded to Google Drive as test.pickle; put the script and the data in the same directory: https://drive.google.com/file/d/1DxVtd9ejMns644EoF31Mf8JjQxK94JsH/view?usp=sharing
I attach my code here:
All functions and models needed:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils import data
import os
import torch.optim as optim
import math
import numpy as np
import _pickle as cPickle
import collections
from random import shuffle
os.environ["CUDA_VISIBLE_DEVICES"]= '0,1,2,3'
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
## model
class ContactAttention_simple(nn.Module):
"""docstring for ContactAttention_simple"""
def __init__(self, d,L):
super(ContactAttention_simple, self).__init__()
self.d = d
self.L = L
self.conv1d1= nn.Conv1d(in_channels=4, out_channels=d,
kernel_size=9, padding=8, dilation=2)
self.bn1 = nn.BatchNorm1d(d)
self.conv_test_1 = nn.Conv2d(in_channels=6*d, out_channels=d, kernel_size=1)
self.bn_conv_1 = nn.BatchNorm2d(d)
self.conv_test_2 = nn.Conv2d(in_channels=d, out_channels=d, kernel_size=1)
self.bn_conv_2 = nn.BatchNorm2d(d)
self.conv_test_3 = nn.Conv2d(in_channels=d, out_channels=1, kernel_size=1)
self.position_embedding_1d = nn.Parameter(
torch.randn(1, d, 600)
)
# transformer encoder for the input sequences
self.encoder_layer = nn.TransformerEncoderLayer(2*d, 2)
self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer, 3)
def forward(self, prior, seq, state):
"""
prior: L*L*1
seq: L*4
state: L*L
"""
position_embeds = self.position_embedding_1d.repeat(seq.shape[0],1,1)
seq = seq.permute(0, 2, 1) # 4*L
seq = F.relu(self.bn1(self.conv1d1(seq))) #d*L just for increase the capacity
seq = torch.cat([seq, position_embeds], 1) # 2d*L
seq = self.transformer_encoder(seq.permute(-1, 0, 1))
seq = seq.permute(1, 2, 0)
# what about apply attention on the the 2d map?
seq_mat = self.matrix_rep(seq) # 4d*L*L
p_mat = self.matrix_rep(position_embeds) # 2d*L*L
infor = torch.cat([seq_mat, p_mat], 1) # 6d*L*L
contact = F.relu(self.bn_conv_1(self.conv_test_1(infor)))
contact = F.relu(self.bn_conv_2(self.conv_test_2(contact)))
contact = self.conv_test_3(contact)
contact = contact.view(-1, self.L, self.L)
contact = (contact+torch.transpose(contact, -1, -2))/2
return contact.view(-1, self.L, self.L)
def matrix_rep(self, x):
'''
for each position i,j of the matrix, we concatenate the embedding of i and j
'''
x = x.permute(0, 2, 1) # L*d
L = x.shape[1]
x2 = x
x = x.unsqueeze(1)
x2 = x2.unsqueeze(2)
x = x.repeat(1, L,1,1)
x2 = x2.repeat(1, 1, L,1)
mat = torch.cat([x,x2],-1) # L*L*2d
# make it symmetric
# mat_tril = torch.cat(
# [torch.tril(mat[:,:, i]) for i in range(mat.shape[-1])], -1)
mat_tril = torch.tril(mat.permute(0, -1, 1, 2)) # 2d*L*L
mat_diag = mat_tril - torch.tril(mat.permute(0, -1, 1, 2), diagonal=-1)
mat = mat_tril + torch.transpose(mat_tril, -2, -1) - mat_diag
return mat
class ContactAttention_simple_fix_PE(ContactAttention_simple):
"""docstring for ContactAttention_simple_fix_PE"""
def __init__(self, d, L, device):
super(ContactAttention_simple_fix_PE, self).__init__(d, L)
self.PE_net = nn.Sequential(
nn.Linear(111,5*d),
nn.ReLU(),
nn.Linear(5*d,5*d),
nn.ReLU(),
nn.Linear(5*d,d))
def forward(self, pe, seq, state):
"""
prior: L*L*1
seq: L*4
state: L*L
"""
print(pe.shape, pe.device)
position_embeds = self.PE_net(pe.view(-1, 111)).view(-1, self.L, self.d) # N*L*111 -> N*L*d
position_embeds = position_embeds.permute(0, 2, 1) # N*d*L
seq = seq.permute(0, 2, 1) # 4*L
seq = F.relu(self.bn1(self.conv1d1(seq))) #d*L just for increase the capacity
seq = torch.cat([seq, position_embeds], 1) # 2d*L
seq = self.transformer_encoder(seq.permute(-1, 0, 1))
seq = seq.permute(1, 2, 0)
# what about apply attention on the the 2d map?
seq_mat = self.matrix_rep(seq) # 4d*L*L
p_mat = self.matrix_rep(position_embeds) # 2d*L*L
infor = torch.cat([seq_mat, p_mat], 1) # 6d*L*L
contact = F.relu(self.bn_conv_1(self.conv_test_1(infor)))
contact = F.relu(self.bn_conv_2(self.conv_test_2(contact)))
contact = self.conv_test_3(contact)
contact = contact.view(-1, self.L, self.L)
contact = (contact+torch.transpose(contact, -1, -2))/2
return contact.view(-1, self.L, self.L)
char_dict = {
0: 'A',
1: 'U',
2: 'C',
3: 'G'
}
def encoding2seq(arr):
seq = list()
for arr_row in list(arr):
if sum(arr_row)==0:
seq.append('.')
else:
seq.append(char_dict[np.argmax(arr_row)])
return ''.join(seq)
class RNASSDataGenerator(object):
def __init__(self, data_dir, split, upsampling=False):
self.data_dir = data_dir
self.split = split
self.upsampling = upsampling
# Load vocab explicitly when needed
self.load_data()
# Reset batch pointer to zero
self.batch_pointer = 0
def load_data(self):
data_dir = self.data_dir
# Load the current split
RNA_SS_data = collections.namedtuple('RNA_SS_data','seq ss_label length name pairs')
with open(os.path.join(data_dir, '%s.pickle' % self.split), 'rb') as f:
self.data = cPickle.load(f)
#if self.upsampling:
# self.data = self.upsampling_data()
self.data_x = np.array([instance[0] for instance in self.data])
self.data_y = np.array([instance[1] for instance in self.data])
self.pairs = np.array([instance[-1] for instance in self.data])
self.seq_length = np.array([instance[2] for instance in self.data])
self.len = len(self.data)
self.seq = list(map(encoding2seq, self.data_x))
self.seq_max_len = len(self.data_x[0])
def pairs2map(self, pairs):
seq_len = self.seq_max_len
contact = np.zeros([seq_len, seq_len])
for pair in pairs:
contact[pair[0], pair[1]] = 1
return contact
def get_one_sample(self, index):
# This will return a smaller size if not sufficient
# The user must pad the batch in an external API
# Or write a TF module with variable batch size
data_y = self.data_y[index]
data_seq = self.data_x[index]
data_len = self.seq_length[index]
data_pair = self.pairs[index]
contact= self.pairs2map(data_pair)
matrix_rep = np.zeros(contact.shape)
return contact, data_seq, matrix_rep, data_len
class Dataset(data.Dataset):
def __init__(self, data):
self.data = data
def __len__(self):
return self.data.len
def __getitem__(self, index):
return self.data.get_one_sample(index)
#position embedding
def get_pe(seq_lens, max_len):
#batch_size*1--> batch_size N
num_seq = seq_lens.shape[0]
#absolute position: from 1 to 600 : N*L*1
pos_i_abs = torch.Tensor(np.arange(1,max_len+1)).view(1,
-1, 1).expand(num_seq, -1, -1).double()
#relatve position: from 1 to 600: N*L
pos_i_rel = torch.Tensor(np.arange(1,max_len+1)).view(1, -1).expand(num_seq, -1)
# N*L/N*1 --> N*L
pos_i_rel = pos_i_rel.double()/seq_lens.view(-1, 1).double()
pos_i_rel = pos_i_rel.unsqueeze(-1) #N*L*1
pos = torch.cat([pos_i_abs, pos_i_rel], -1) #N*L*2
PE_element_list = list()
# 1/x, 1/x^2
PE_element_list.append(pos) #N*L*2
PE_element_list.append(1.0/pos_i_abs) #N*L*1
PE_element_list.append(1.0/torch.pow(pos_i_abs, 2)) #N*L*1
# sin(nx)
for n in range(1, 50):
PE_element_list.append(torch.sin(n*pos)) # 49(N*L*2)
# poly
for i in range(2, 5):
PE_element_list.append(torch.pow(pos_i_rel, i)) #3(N*L*1)
for i in range(3):
gaussian_base = torch.exp(-torch.pow(pos,
2))*math.sqrt(math.pow(2,i)/math.factorial(i))*torch.pow(pos, i)
PE_element_list.append(gaussian_base) #3(N*L*2)
PE = torch.cat(PE_element_list, -1) #N*L*111
# zero padding
for i in range(num_seq):
PE[i, seq_lens[i]:, :] = 0
return PE
def contact_map_masks(seq_lens, max_len):
n_seq = len(seq_lens) #N
masks = np.zeros([n_seq, max_len, max_len]) #N*L*L
for i in range(n_seq):
l = int(seq_lens[i].cpu().numpy())
masks[i, :l, :l]=1
return masks
Data and Model Training:
train_data = RNASSDataGenerator('./','test')
seq_len = train_data.data_y.shape[-2]
params = {'batch_size': 8,
'shuffle': True,
'num_workers': 0,
'drop_last': True}
train_set = Dataset(train_data)
train_generator = data.DataLoader(train_set, **params)
contact_net = ContactAttention_simple_fix_PE(d=10, L=seq_len, device=device).to(device)
u_optimizer = optim.Adam(contact_net.parameters())
pos_weight = torch.Tensor([300]).to(device)
criterion_bce_weighted = torch.nn.BCEWithLogitsLoss(
pos_weight = pos_weight)
steps_done = 0
for epoch in range(50):
for contacts, seq_embeddings, matrix_reps, seq_lens in train_generator:
contact_net.train()
contacts_batch = torch.Tensor(contacts.float()).to(device)
seq_embedding_batch = torch.Tensor(seq_embeddings.float()).to(device)
matrix_reps_batch = torch.unsqueeze(torch.Tensor(matrix_reps.float()).to(device), -1)
# padding the states for supervised training with all 0s
state_pad = torch.zeros([matrix_reps_batch.shape[0],seq_len, seq_len]).to(device)
PE_batch = get_pe(seq_lens, seq_len).float().to(device)
contact_masks = torch.Tensor(contact_map_masks(seq_lens, seq_len)).to(device)
contact_net = torch.nn.DataParallel(contact_net)
output = contact_net(PE_batch,seq_embedding_batch, state_pad)
# Compute loss
loss_u = criterion_bce_weighted(output*contact_masks, contacts_batch)
# print(steps_done)
if steps_done % 100 ==0:
print('Stage 1, epoch: {},step: {}, loss: {}'.format(
epoch, steps_done, loss_u))
# Optimize the model
u_optimizer.zero_grad()
loss_u.backward()
u_optimizer.step()
steps_done=steps_done+1 |
st177565 | Are you seeing the same issue using random data? If so, could you post the shapes so that I could reproduce it without downloading your dataset? |
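For example, random stand-ins with the shapes your script prints should be enough to call the model without the pickle file (shapes are guessed from the posted code and outputs, so treat this as a sketch):
import torch

N, L = 8, 600  # batch size and seq_len from the posted script
PE_batch = torch.randn(N, L, 111).cuda()
seq_embedding_batch = torch.randn(N, L, 4).cuda()
state_pad = torch.zeros(N, L, L).cuda()
contacts_batch = torch.randint(0, 2, (N, L, L)).float().cuda()
contact_masks = torch.ones(N, L, L).cuda()
output = contact_net(PE_batch, seq_embedding_batch, state_pad)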
st177566 | @ptrblck Unfortunately not, since I am using predefined functions (class RNASSDataGenerator) to pre-process the input data, so I have to stick with the data format. I doubt the class RNASSDataGenerator is causing the trouble.
When I use the CPU, everything is OK and the batch shape is correct at every step:
torch.Size([8, 600, 111]) cpu
Stage 1, epoch: 0,step: 0, loss: 0.7845466136932373
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu
torch.Size([8, 600, 111]) cpu |
st177567 | It seems the error is raised by rewrapping the model into nn.DataParallel in each iteration.
Move contact_net = torch.nn.DataParallel(contact_net) before the epoch loop and it should work.
I don’t know why this usage gives a device mismatch error and think a better error message should be raised. Could you create a GitHub issue, so that we can track and fix it? |
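A minimal sketch of the corrected loop structure, based on the training code posted above (only the wrapping moves, everything else stays as it was):
contact_net = ContactAttention_simple_fix_PE(d=10, L=seq_len, device=device).to(device)
contact_net = torch.nn.DataParallel(contact_net)  # wrap once, outside the loops
u_optimizer = optim.Adam(contact_net.parameters())

for epoch in range(50):
    for contacts, seq_embeddings, matrix_reps, seq_lens in train_generator:
        contact_net.train()
        # ... build contacts_batch, PE_batch, seq_embedding_batch, state_pad,
        # and contact_masks exactly as in the script above ...
        output = contact_net(PE_batch, seq_embedding_batch, state_pad)
        loss_u = criterion_bce_weighted(output * contact_masks, contacts_batch)
        u_optimizer.zero_grad()
        loss_u.backward()
        u_optimizer.step()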