How to fix the strange ImportError / PermissionError that occurs when two identical PyTorch DDP scripts run on different GPUs at the same time
I have a program that uses PyTorch DistributedDataParallel (DDP) for multi-GPU parallelism. The script runs correctly with CUDA_VISIBLE_DEVICES=0,1 python ddp_test.py. The problem is that when I try to run the same script a second time on the same node, with CUDA_VISIBLE_DEVICES=2,3 python ddp_test.py, the second run fails with an ImportError or PermissionError.
Here is a minimal example: ddp_test.py is a small training program, and ddp_import.py and ddp_import_import.py simulate the imports in my real training code.
# ddp_test.py
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
from ddp_import import *


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N',
                        help='number of nodes (default: 1)')
    parser.add_argument('-g', '--gpus', default=2, type=int,
                        help='number of gpus per node')
    parser.add_argument('-nr', '--nr', default=0, type=int,
                        help='ranking within the nodes')
    parser.add_argument('--epochs', default=50, type=int,
                        help='number of total epochs to run')
    args = parser.parse_args()
    args.world_size = args.gpus * args.nodes
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '1234'
    mp.spawn(train, nprocs=args.gpus, args=(args,))


class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7 * 7 * 32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out


def train(gpu, args):
    rank = args.nr * args.gpus + gpu
    dist.init_process_group(backend='nccl', init_method='env://',
                            world_size=args.world_size, rank=rank)
    torch.manual_seed(0)
    model = ConvNet()
    print("rank:{} gpu:{}".format(rank, gpu))
    torch.cuda.set_device(gpu)
    model.cuda(gpu)
    batch_size = 20
    # define loss function (criterion) and optimizer
    criterion = nn.CrossEntropyLoss().cuda(gpu)
    optimizer = torch.optim.SGD(model.parameters(), 1e-4)
    # Wrap the model
    model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
    # Data loading code
    train_dataset = torchvision.datasets.MNIST(root='./data', train=True,
                                               transform=transforms.ToTensor(),
                                               download=True)
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        train_dataset, num_replicas=args.world_size, rank=rank)
    train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                               batch_size=batch_size,
                                               shuffle=False, num_workers=0,
                                               pin_memory=True,
                                               sampler=train_sampler)
    start = datetime.now()
    total_step = len(train_loader)
    for epoch in range(args.epochs):
        for i, (images, labels) in enumerate(train_loader):
            images = images.cuda(non_blocking=True)
            labels = labels.cuda(non_blocking=True)
            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, labels)
            # Backward and optimize
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if (i + 1) % 100 == 0 and gpu == 0:
                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
                    epoch + 1, args.epochs, i + 1, total_step, loss.item()))
    if gpu == 0:
        print("Training complete in: " + str(datetime.now() - start))


if __name__ == '__main__':
    main()
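Since both runs hard-code MASTER_PORT = '1234', I first suspected that the two jobs were clashing at the rendezvous port. Here is a sketch of what I tried to rule that out, giving each run its own free port (find_free_port is my own helper, and this assumes the port, not the imports, is the issue):

```python
import os
import socket


def find_free_port() -> int:
    """Ask the OS for a currently unused TCP port on localhost."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))  # port 0 lets the kernel pick a free port
        return s.getsockname()[1]


# Each run gets a distinct rendezvous port instead of the hard-coded '1234'.
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = str(find_free_port())
```

This removed the risk of a port collision, but the ImportError / PermissionError still looks unrelated to the rendezvous itself.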
# ddp_import.py
from ddp_import_import import *
# ddp_import_import.py
import nltk
from pycocotools.coco import COCO
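Because nltk and pycocotools both read from shared locations on import, I also guessed that two concurrent runs might be fighting over the same cache directory. As a hedge I tried pointing each run at a private nltk_data directory before importing nltk (NLTK_DATA is the standard environment variable nltk uses to locate its data; the cause here is only my guess):

```python
import os
import tempfile

# Give this run a private nltk_data directory so two concurrent runs
# never touch the same cache files (hypothetical mitigation).
private_nltk_dir = tempfile.mkdtemp(prefix="nltk_data_")
os.environ["NLTK_DATA"] = private_nltk_dir

# import nltk  # import only after NLTK_DATA is set so nltk picks it up
```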
When I launch the script with CUDA_VISIBLE_DEVICES=0,1 python ddp_test.py on its own, it works fine. But if I run two copies of it at the same time, one with CUDA_VISIBLE_DEVICES=0,1 python ddp_test.py and the other with CUDA_VISIBLE_DEVICES=2,3 python ddp_test.py (the node has 4 GPUs), an error occurs: a PermissionError, and sometimes an ImportError.
I am new to DDP, and this problem has confused me for days. Can anyone tell me where the problem is and how to fix it? Thanks a lot!