
Accelerating PyTorch training (single machine, multiple GPUs)



Method 1: nn.DataParallel

# main.py
import torch
import torch.nn as nn
import torch.optim as optim

gpus = [0, 1, 2, 3]  # which GPUs to use
torch.cuda.set_device('cuda:{}'.format(gpus[0]))  # make the first GPU the default device

train_dataset = ...
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=...)

model = ...
# output_device is the card that gathers the outputs and gradients; it should generally be the one with the most free memory
model = nn.DataParallel(model.cuda(), device_ids=gpus, output_device=gpus[0])
optimizer = optim.SGD(model.parameters(), lr=...)

for epoch in range(100):
  for batch_idx, (images, target) in enumerate(train_loader):
    images = images.cuda(non_blocking=True)  # move the batch to the GPU
    target = target.cuda(non_blocking=True)  # move the labels to the GPU
    ...
    output = model(images)
    loss = criterion(output, target)
    ...
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# To train, simply run: python main.py
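nn.DataParallel runs in a single process and simply wraps the original network, so the trained weights end up under model.module. Below is a minimal checkpointing sketch under that assumption; the toy model and the file name are placeholders, not part of the original script.

import torch
import torch.nn as nn

net = nn.Linear(10, 2).cuda()   # toy model purely for illustration
model = nn.DataParallel(net)    # wraps net; forward passes are scattered across the visible GPUs

# save the underlying module so the checkpoint can later be loaded without DataParallel
torch.save(model.module.state_dict(), 'checkpoint.pth')  # placeholder path

# later, load it back into a plain (non-parallel) model
plain = nn.Linear(10, 2)
plain.load_state_dict(torch.load('checkpoint.pth', map_location='cpu'))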

 

Method 2: accelerating with torch.distributed
# main.py
import torch
import argparse
import torch.distributed as dist
import torch.optim as optim

# read the index (local_rank) of the current GPU process, which the launcher passes in
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', default=-1, type=int,
                    help='node rank for distributed training')
args = parser.parse_args()

# set up the backend (and port) used for communication between GPUs
dist.init_process_group(backend='nccl')
torch.cuda.set_device(args.local_rank)

# use DistributedSampler to partition the dataset across processes
train_dataset = ... 
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=..., sampler=train_sampler)
# wrap the model with DistributedDataParallel, which all-reduces the gradients computed on the different GPUs and keeps the replicas in sync
model = ...
model = model.cuda()
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])
optimizer = optim.SGD(model.parameters(), lr=...)
for epoch in range(100):
  for batch_idx, (images, target) in enumerate(train_loader):
    images = images.cuda(non_blocking=True)
    target = target.cuda(non_blocking=True)
    ...
    output = model(images)
    loss = criterion(output, target)
    ...
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# How to launch: CUDA_VISIBLE_DEVICES selects which GPUs to use; --nproc_per_node is the number of processes, one per GPU
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 main.py
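torch.distributed.launch passes --local_rank on the command line, which is why the script defines that flag; newer PyTorch versions prefer torchrun, which provides the same value through the LOCAL_RANK environment variable instead. A small sketch that accepts either convention (the fallback logic is an assumption, not part of the original script):

import os
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', default=-1, type=int,
                    help='node rank for distributed training')
args = parser.parse_args()

# torchrun does not pass --local_rank; it sets LOCAL_RANK in the environment instead
if args.local_rank == -1:
    args.local_rank = int(os.environ.get('LOCAL_RANK', 0))

With torchrun the launch line would look like: torchrun --nproc_per_node=4 main.py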
# Using torch.multiprocessing instead of the launcher

# main.py
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import argparse
import torch.optim as optim

parser = argparse.ArgumentParser()
args = parser.parse_args()
args.nprocs = torch.cuda.device_count()  # one process per visible GPU


def main_worker(local_rank, nprocs, args):
    # each spawned process receives its own local_rank (0 .. nprocs-1) as the first argument
    dist.init_process_group(backend='nccl', init_method='tcp://127.0.0.1:23456',
                            world_size=args.nprocs, rank=local_rank)
    torch.cuda.set_device(local_rank)

    train_dataset = ...
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=..., sampler=train_sampler)

    model = ...
    model = model.cuda()
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
    optimizer = optim.SGD(model.parameters(), lr=...)

    for epoch in range(100):
        for batch_idx, (images, target) in enumerate(train_loader):
            images = images.cuda(non_blocking=True)
            target = target.cuda(non_blocking=True)
            ...
            output = model(images)
            loss = criterion(output, target)
            ...
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


if __name__ == '__main__':
    # spawn one worker process per GPU instead of relying on the external launcher
    mp.spawn(main_worker, nprocs=args.nprocs, args=(args.nprocs, args))
# How to launch: python main.py
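Since every spawned process runs the same code, logging and checkpointing are normally restricted to rank 0. A minimal helper sketched under that assumption (save_checkpoint and its default path are hypothetical names, not part of the original script):

import torch
import torch.distributed as dist

def save_checkpoint(model, path='checkpoint.pth'):
    # DistributedDataParallel wraps the network, so the raw weights live under model.module
    if dist.get_rank() == 0:  # only rank 0 writes to disk
        torch.save(model.module.state_dict(), path)
    dist.barrier()            # keep all processes in step until the file is written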

 

Method 3: accelerating with Apex
# main.py
import torch
import argparse
import torch.distributed as dist
import torch.optim as optim

from apex import amp
from apex.parallel import DistributedDataParallel

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', default=-1, type=int,
                    help='node rank for distributed training')
args = parser.parse_args()

dist.init_process_group(backend='nccl')
torch.cuda.set_device(args.local_rank)

train_dataset = ...
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=..., sampler=train_sampler)

model = ...
model = model.cuda()
optimizer = optim.SGD(model.parameters(), lr=...)

# amp.initialize must be called after the model is on the GPU and the optimizer is built, but before DDP wrapping
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
# apex's DistributedDataParallel uses the current CUDA device, so no device_ids argument is needed
model = DistributedDataParallel(model)

for epoch in range(100):
    for batch_idx, (images, target) in enumerate(train_loader):
        images = images.cuda(non_blocking=True)
        target = target.cuda(non_blocking=True)
        ...
        output = model(images)
        loss = criterion(output, target)
        optimizer.zero_grad()
        # scale the loss before backward so that fp16 gradients do not underflow
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()
        optimizer.step()
# How to launch
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 main.py
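PyTorch now ships its own automatic mixed precision in torch.cuda.amp, which covers the same use case without installing apex. A sketch of the equivalent training loop, assuming the model, optimizer, criterion and train_loader defined in the code above:

from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()  # plays the role of amp.scale_loss: scales the loss so fp16 gradients do not underflow
for epoch in range(100):
    for batch_idx, (images, target) in enumerate(train_loader):
        images = images.cuda(non_blocking=True)
        target = target.cuda(non_blocking=True)
        optimizer.zero_grad()
        with autocast():                  # run the forward pass in mixed precision
            output = model(images)
            loss = criterion(output, target)
        scaler.scale(loss).backward()     # backward on the scaled loss
        scaler.step(optimizer)            # unscales the gradients, then calls optimizer.step()
        scaler.update()                   # adjust the scale factor for the next iteration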

  

Source: https://www.cnblogs.com/zhaojianhui/p/16684209.html
