
What can you do when GPU memory runs out while training a deep learning model?

  Author: 游客26024 @ 知乎 (reposted with permission)
  Source: https://www.zhihu.com/question/461811359/answer/2492822726
  Editor: 极市平台
  As an aside, why did I write this post at all? Because I'm broke! No money! Renting a multi-GPU server burns through cash in no time (while the GPU memory goes underused), so I badly needed a trick to cut memory usage and speed training up.
  Back to the topic: if the dataset is large and the network is deep, training becomes slow. To speed it up we can use PyTorch's AMP (autocast and GradScaler). That is what this post is about: comparing training with and without PyTorch AMP (autocast and GradScaler) to show how automatic mixed precision accelerates model training.
  Note that PyTorch 1.6+ already ships with torch.cuda.amp, so NVIDIA's apex library (half-precision acceleration) is no longer needed. For convenience we do not use apex (installation is a hassle) and use torch.cuda.amp instead.
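  As a quick sanity check (a minimal sketch of my own, not from the original post), you can confirm that the installed PyTorch is new enough to provide torch.cuda.amp before dropping apex:

import torch

print(torch.__version__)                    # needs 1.6 or newer for torch.cuda.amp
from torch.cuda.amp import autocast, GradScaler  # fails with ImportError on older versions

print("CUDA available:", torch.cuda.is_available())  # autocast/GradScaler only help on CUDA devices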
  AMP (Automatic Mixed Precision): so what exactly is automatic mixed precision?
  A quick bit of history first: NVIDIA's apex came first; NVIDIA's developers later contributed it to PyTorch, and it became torch.cuda.amp in PyTorch 1.6+ [this is my own reconstruction and may be inaccurate, please leave a comment if so].
  In more detail: by default, most deep learning frameworks train with 32-bit floating point arithmetic. In 2017 NVIDIA developed a mixed-precision training approach (apex) that combines single precision (FP32) with half precision (FP16) during training; with the same hyperparameters it reaches almost the same accuracy as pure FP32 while training considerably faster.
  Then came the AMP era (meaning torch.cuda.amp specifically), with two key ideas: automatic and mixed precision (torch.cuda.amp in PyTorch 1.6+). Automatic means tensor dtypes change automatically: the framework adjusts each tensor's dtype as needed, though a few places may still require manual intervention. Mixed precision means tensors of more than one precision are used, namely torch.FloatTensor and torch.HalfTensor. And as the name torch.cuda.amp implies, this feature can only be used on CUDA. Why should we use AMP at all?
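  To make the "automatic" part concrete, here is a tiny illustration of my own (not from the original post); it assumes a CUDA device is available. Inside the autocast region, operations that benefit from FP16 (such as matmul) run in half precision, while the same operation outside the region stays in FP32:

import torch
from torch.cuda.amp import autocast

a = torch.randn(8, 8, device="cuda")   # torch.float32 by default
b = torch.randn(8, 8, device="cuda")

with autocast():
    c = a @ b                 # matmul is autocast to half precision
    print(c.dtype)            # torch.float16
print((a @ b).dtype)          # torch.float32 outside the autocast region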
  1. It reduces GPU memory usage (an FP16 advantage).
  2. It speeds up training and inference (an FP16 advantage).
  3. Tensor Cores (NVIDIA Tensor Core) are now widespread and are built for low-precision math (an FP16 advantage).
  4. Mixed-precision training mitigates the rounding-error problem (a weakness of FP16 that FP32 avoids, which is why both precisions are kept).
  5. Loss scaling: even with mixed precision the model may fail to converge, because activation gradients are very small and underflow; torch.cuda.amp.GradScaler scales the loss up to prevent gradient underflow.
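  Point 5 is easy to see numerically. FP16 cannot represent very small gradient values, which is exactly what GradScaler's loss scaling works around. A minimal illustration of my own (not from the original post):

import torch

g = torch.tensor(1e-8)                     # a tiny activation gradient, fine in FP32
print(g.half())                            # tensor(0., dtype=torch.float16) -> underflow

scale = 1024.0
print((g * scale).half())                  # ~1e-05, representable in FP16
print((g * scale).half().float() / scale)  # ~1e-08 again: unscale in FP32 (parameter grads are FP32)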
  To be clear, the point of this post is how to make model training faster, not to explain the underlying theory. The examples use AlexNet as the architecture (it takes 227x227x3 input images), CIFAR10 as the dataset, AdamW as the optimizer, and ReduceLROnPlateau as the learning-rate scheduler. The machine is a Lenovo Legion laptop with an RTX 2060; weak, but enough for these tests.
  The post covers three setups: 1. training and evaluation without DDP or DP (then with AMP added); 2. data-parallel (DP) training and evaluation (then with AMP added); 3. single-process multi-GPU DDP training and evaluation (then with AMP added).
  Project layout when running these scripts:

D:/PycharmProject/Simple-CV-Pytorch-master
|----AMP (train_without.py, train_DP.py, train_autocast.py, train_GradScaler.py, eval_XXX.py, etc.; the alexnet.py added later also lives here)
|----tensorboard (folder for tensorboard logs)
|----checkpoint (folder for saved models)
|----data (folder containing the dataset)

1. Training and evaluation without DDP or DP
  The experiment without DDP or DP serves as the baseline for comparison.
  (1) Training and evaluation code for the original model:
  Training code:
  Note: this code is extremely bare-bones, only a rough prototype; it just needs to be roughly understandable!
train_without.py

import time
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision.models import alexnet
from torchvision import transforms
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description="CV Train")
    parser.add_mutually_exclusive_group()
    parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")
    parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")
    parser.add_argument("--img_size", type=int, default=227, help="image size")
    parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")
    parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")
    parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")
    parser.add_argument("--batch_size", type=int, default=64, help="batch size")
    parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")
    parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")
    parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == "CIFAR10":
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")

cuda = torch.cuda.is_available()
print("CUDA available: {}".format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)

# 8.Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        outputs = model(imgs)
        loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()
  Run result:
  Tensorboard view:
  Evaluation code:
  The code is very rough, especially the device handling and the accuracy computation; it is for reference only, do not imitate it!
eval_without.py

import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description="CV Evaluation")
    parser.add_mutually_exclusive_group()
    parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")
    parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")
    parser.add_argument("--img_size", type=int, default=227, help="image size")
    parser.add_argument("--batch_size", type=int, default=64, help="batch size")
    parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")
    return parser.parse_args()


args = parse_args()

# 1.Create model
model = alexnet()

# 2.Ready Dataset
if args.dataset == "CIFAR10":
    test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                transform=transforms.Compose(
                                                    [transforms.Resize(args.img_size),
                                                     transforms.ToTensor()]),
                                                download=True)
else:
    raise ValueError("Dataset is not CIFAR10")

# 3.Length
test_dataset_size = len(test_dataset)
print("the test dataset size is {}".format(test_dataset_size))

# 4.DataLoader
test_dataloader = DataLoader(dataset=test_dataset, batch_size=args.batch_size)

# 5. Set some parameters for testing the network
total_accuracy = 0

# test
model.eval()
with torch.no_grad():
    for data in test_dataloader:
        imgs, targets = data
        device = torch.device("cpu")
        imgs, targets = imgs.to(device), targets.to(device)
        model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
        model.load_state_dict(model_load)
        outputs = model(imgs)
        outputs = outputs.to(device)
        accuracy = (outputs.argmax(1) == targets).sum()
        total_accuracy = total_accuracy + accuracy
        accuracy = total_accuracy / test_dataset_size
    print("the total accuracy is {}".format(accuracy))
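  As the author warns above, this evaluation loop reloads the checkpoint on every batch and mixes per-batch and total accuracy. For reference only, a cleaner pattern under the same assumptions (same model, args, test_dataloader and test_dataset_size as in the script above; this is my sketch, not the original code) would load the weights once and accumulate correct predictions:

# Pick the device and load the weights once, before the loop.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.load_state_dict(torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device))
model.to(device)
model.eval()

correct = 0
with torch.no_grad():
    for imgs, targets in test_dataloader:
        imgs, targets = imgs.to(device), targets.to(device)
        outputs = model(imgs)
        correct += (outputs.argmax(1) == targets).sum().item()

print("the total accuracy is {}".format(correct / test_dataset_size))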
  Run result:
  Analysis:
  Training the original model for 20 epochs took 22 minutes 22 seconds, and the accuracy reached 0.8191.
  (2) Training and evaluation code for the original model with autocast:
  Training code:
  Rough training flow:

from torch.cuda.amp import autocast as autocast

...

# Create model, default torch.FloatTensor
model = Net().cuda()

# SGD, Adam, AdamW, ...
optim = optim.XXX(model.parameters(), ...)

...

for imgs, targets in dataloader:
    imgs, targets = imgs.cuda(), targets.cuda()

    ...
    with autocast():
        outputs = model(imgs)
        loss = loss_fn(outputs, targets)
    ...
    optim.zero_grad()
    loss.backward()
    optim.step()

...
  train_autocast_without.py import time import torch import torchvision from torch import nn from torch.cuda.amp import autocast from torchvision import transforms from torchvision.models import alexnet from torch.utils.data import DataLoader from torch.utils.tensorboard import SummaryWriter import numpy as np import argparse     def parse_args():     parser = argparse.ArgumentParser(description="CV Train")     parser.add_mutually_exclusive_group()     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")     parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")     parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")     parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()   # 1.Create SummaryWriter if args.tensorboard:     writer = SummaryWriter(args.tensorboard_log)   # 2.Ready dataset if args.dataset == "CIFAR10":     train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(         [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True) else:     raise ValueError("Dataset is not CIFAR10") cuda = torch.cuda.is_available() print("CUDA available: {}".format(cuda))   # 3.Length train_dataset_size = len(train_dataset) print("the train dataset size is {}".format(train_dataset_size))   # 4.DataLoader train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)   # 5.Create model model = alexnet()   if args.cuda == cuda:     model = model.cuda()   # 6.Create loss cross_entropy_loss = nn.CrossEntropyLoss()   # 7.Optimizer optim = torch.optim.AdamW(model.parameters(), lr=args.lr) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True) # 8. 
Set some parameters to control loop # epoch iter = 0 t0 = time.time() for epoch in range(args.epochs):     t1 = time.time()     print(" -----------------the {} number of training epoch --------------".format(epoch))     model.train()     for data in train_dataloader:         loss = 0         imgs, targets = data         if args.cuda == cuda:             cross_entropy_loss = cross_entropy_loss.cuda()             imgs, targets = imgs.cuda(), targets.cuda()         with autocast():             outputs = model(imgs)             loss_train = cross_entropy_loss(outputs, targets)         loss = loss_train.item() + loss         if args.tensorboard:             writer.add_scalar("train_loss", loss_train.item(), iter)           optim.zero_grad()         loss_train.backward()         optim.step()         iter = iter + 1         if iter % 100 == 0:             print(                 "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "                     .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),                             np.mean(loss)))     if args.tensorboard:         writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)     scheduler.step(np.mean(loss))     t2 = time.time()     h = (t2 - t1) // 3600     m = ((t2 - t1) % 3600) // 60     s = ((t2 - t1) % 3600) % 60     print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))       if epoch % 1 == 0:         print("Save state, iter: {} ".format(epoch))         torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))   torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint)) t3 = time.time() h_t = (t3 - t0) // 3600 m_t = ((t3 - t0) % 3600) // 60 s_t = ((t3 - t0) % 3600) // 60 print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t))) if args.tensorboard:     writer.close()
  Run result:
  Tensorboard view:
  Evaluation code:
  eval_without.py is the same as in 1.(1).
  Run result:
  Analysis:
  Training the original model for 20 epochs took 22 minutes 22 seconds; with autocast it took 21 minutes 21 seconds, so training got faster, and accuracy improved from 0.8191 to 0.8403.
  (3) Training and evaluation code for the original model with autocast and GradScaler:
  torch.cuda.amp.GradScaler scales the loss value up to prevent gradient underflow.
  Training code:
  Rough training flow:

from torch.cuda.amp import autocast as autocast
from torch.cuda.amp import GradScaler as GradScaler

...

# Create model, default torch.FloatTensor
model = Net().cuda()

# SGD, Adam, AdamW, ...
optim = optim.XXX(model.parameters(), ...)
scaler = GradScaler()

...

for imgs, targets in dataloader:
    imgs, targets = imgs.cuda(), targets.cuda()
    ...
    optim.zero_grad()
    ...
    with autocast():
        outputs = model(imgs)
        loss = loss_fn(outputs, targets)

    scaler.scale(loss).backward()
    scaler.step(optim)
    scaler.update()

...
  train_GradScaler_without.py import time import torch import torchvision from torch import nn from torch.cuda.amp import autocast, GradScaler from torchvision import transforms from torchvision.models import alexnet from torch.utils.data import DataLoader from torch.utils.tensorboard import SummaryWriter import numpy as np import argparse     def parse_args():     parser = argparse.ArgumentParser(description="CV Train")     parser.add_mutually_exclusive_group()     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")     parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")     parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")     parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()   # 1.Create SummaryWriter if args.tensorboard:     writer = SummaryWriter(args.tensorboard_log)   # 2.Ready dataset if args.dataset == "CIFAR10":     train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(         [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True) else:     raise ValueError("Dataset is not CIFAR10") cuda = torch.cuda.is_available() print("CUDA available: {}".format(cuda))   # 3.Length train_dataset_size = len(train_dataset) print("the train dataset size is {}".format(train_dataset_size))   # 4.DataLoader train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)   # 5.Create model model = alexnet()   if args.cuda == cuda:     model = model.cuda()   # 6.Create loss cross_entropy_loss = nn.CrossEntropyLoss()   # 7.Optimizer optim = torch.optim.AdamW(model.parameters(), lr=args.lr) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True) scaler = GradScaler() # 8. 
Set some parameters to control loop # epoch iter = 0 t0 = time.time() for epoch in range(args.epochs):     t1 = time.time()     print(" -----------------the {} number of training epoch --------------".format(epoch))     model.train()     for data in train_dataloader:         loss = 0         imgs, targets = data         optim.zero_grad()         if args.cuda == cuda:             cross_entropy_loss = cross_entropy_loss.cuda()             imgs, targets = imgs.cuda(), targets.cuda()         with autocast():             outputs = model(imgs)             loss_train = cross_entropy_loss(outputs, targets)             loss = loss_train.item() + loss         if args.tensorboard:             writer.add_scalar("train_loss", loss_train.item(), iter)           scaler.scale(loss_train).backward()         scaler.step(optim)         scaler.update()         iter = iter + 1         if iter % 100 == 0:             print(                 "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "                     .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),                             np.mean(loss)))     if args.tensorboard:         writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)     scheduler.step(np.mean(loss))     t2 = time.time()     h = (t2 - t1) // 3600     m = ((t2 - t1) % 3600) // 60     s = ((t2 - t1) % 3600) % 60     print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))       if epoch % 1 == 0:         print("Save state, iter: {} ".format(epoch))         torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))   torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint)) t3 = time.time() h_t = (t3 - t0) // 3600 m_t = ((t3 - t0) % 3600) // 60 s_t = ((t3 - t0) % 3600) // 60 print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t))) if args.tensorboard:     writer.close()
  Run result:
  Tensorboard view:
  Evaluation code:
  eval_without.py is the same as in 1.(1).
  Run result:
  Analysis:
  Why did training 20 epochs now take 27 minutes 27 seconds, even more than the original model without any AMP (22 minutes 22 seconds)?
  This is because GradScaler's loss scaling adds work that slows each step down; another possible reason is that my GPU is simply too small for the acceleration to show.
2. Data-parallel (DP) training and evaluation code
  (1) Training and evaluation code for the original model with DP:
  Training code:
  train_DP.py import time import torch import torchvision from torch import nn from torch.utils.data import DataLoader from torchvision.models import alexnet from torchvision import transforms from torch.utils.tensorboard import SummaryWriter import numpy as np import argparse     def parse_args():     parser = argparse.ArgumentParser(description="CV Train")     parser.add_mutually_exclusive_group()     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")     parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")     parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")     parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()   # 1.Create SummaryWriter if args.tensorboard:     writer = SummaryWriter(args.tensorboard_log)   # 2.Ready dataset if args.dataset == "CIFAR10":     train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(         [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True) else:     raise ValueError("Dataset is not CIFAR10") cuda = torch.cuda.is_available() print("CUDA available: {}".format(cuda))   # 3.Length train_dataset_size = len(train_dataset) print("the train dataset size is {}".format(train_dataset_size))   # 4.DataLoader train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)   # 5.Create model model = alexnet()   if args.cuda == cuda:     model = model.cuda()     model = torch.nn.DataParallel(model).cuda() else:     model = torch.nn.DataParallel(model)   # 6.Create loss cross_entropy_loss = nn.CrossEntropyLoss()   # 7.Optimizer optim = torch.optim.AdamW(model.parameters(), lr=args.lr) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True) # 8. 
Set some parameters to control loop # epoch iter = 0 t0 = time.time() for epoch in range(args.epochs):     t1 = time.time()     print(" -----------------the {} number of training epoch --------------".format(epoch))     model.train()     for data in train_dataloader:         loss = 0         imgs, targets = data         if args.cuda == cuda:             cross_entropy_loss = cross_entropy_loss.cuda()             imgs, targets = imgs.cuda(), targets.cuda()         outputs = model(imgs)         loss_train = cross_entropy_loss(outputs, targets)         loss = loss_train.item() + loss         if args.tensorboard:             writer.add_scalar("train_loss", loss_train.item(), iter)           optim.zero_grad()         loss_train.backward()         optim.step()         iter = iter + 1         if iter % 100 == 0:             print(                 "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "                     .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),                             np.mean(loss)))     if args.tensorboard:         writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)     scheduler.step(np.mean(loss))     t2 = time.time()     h = (t2 - t1) // 3600     m = ((t2 - t1) % 3600) // 60     s = ((t2 - t1) % 3600) % 60     print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))       if epoch % 1 == 0:         print("Save state, iter: {} ".format(epoch))         torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))   torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint)) t3 = time.time() h_t = (t3 - t0) // 3600 m_t = ((t3 - t0) % 3600) // 60 s_t = ((t3 - t0) % 3600) // 60 print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t))) if args.tensorboard:     writer.close()
  Run result:
  Tensorboard view:
  Evaluation code:
  eval_DP.py import torch import torchvision from torch.utils.data import DataLoader from torchvision.transforms import transforms from alexnet import alexnet import argparse     # eval def parse_args():     parser = argparse.ArgumentParser(description="CV Evaluation")     parser.add_mutually_exclusive_group()     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args() # 1.Create model model = alexnet() model = torch.nn.DataParallel(model)   # 2.Ready Dataset if args.dataset == "CIFAR10":     test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,                                                 transform=transforms.Compose(                                                     [transforms.Resize(args.img_size),                                                      transforms.ToTensor()]),                                                 download=True) else:     raise ValueError("Dataset is not CIFAR10") # 3.Length test_dataset_size = len(test_dataset) print("the test dataset size is {}".format(test_dataset_size))   # 4.DataLoader test_dataloader = DataLoader(dataset=test_dataset, batch_size=args.batch_size)   # 5. Set some parameters for testing the network total_accuracy = 0   # test model.eval() with torch.no_grad():     for data in test_dataloader:         imgs, targets = data         device = torch.device("cpu")         imgs, targets = imgs.to(device), targets.to(device)         model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)         model.load_state_dict(model_load)         outputs = model(imgs)         outputs = outputs.to(device)         accuracy = (outputs.argmax(1) == targets).sum()         total_accuracy = total_accuracy + accuracy         accuracy = total_accuracy / test_dataset_size     print("the total accuracy is {}".format(accuracy))
  Run result:
  (2) Training and evaluation code for DP with autocast:
  Training code:
  If you write the code like this, the autocast has no effect!!!

...
model = Model()
model = torch.nn.DataParallel(model)
...
with autocast():
    output = model(imgs)
    loss = loss_fn(output)
  The correct way; rough training flow:

1.
class Model(nn.Module):
    @autocast()
    def forward(self, input):
        ...

2.
class Model(nn.Module):
    def forward(self, input):
        with autocast():
            ...

  Either 1 or 2 works; after that:

...
model = Model()
model = torch.nn.DataParallel(model)
with autocast():
    output = model(imgs)
    loss = loss_fn(output)
...
  Model:
  You must add the @autocast() decorator to forward, or put with autocast(): at the very top of forward. (DataParallel runs the forward pass in a separate thread per GPU, and the autocast state is thread-local, so enabling it only in the main thread does not reach those threads.)
  alexnet.py import torch import torch.nn as nn from torchvision.models.utils import load_state_dict_from_url from torch.cuda.amp import autocast from typing import Any   __all__ = ["AlexNet", "alexnet"]   model_urls = {     "alexnet": "https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth", }     class AlexNet(nn.Module):       def __init__(self, num_classes: int = 1000) -> None:         super(AlexNet, self).__init__()         self.features = nn.Sequential(             nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),             nn.ReLU(inplace=True),             nn.MaxPool2d(kernel_size=3, stride=2),             nn.Conv2d(64, 192, kernel_size=5, padding=2),             nn.ReLU(inplace=True),             nn.MaxPool2d(kernel_size=3, stride=2),             nn.Conv2d(192, 384, kernel_size=3, padding=1),             nn.ReLU(inplace=True),             nn.Conv2d(384, 256, kernel_size=3, padding=1),             nn.ReLU(inplace=True),             nn.Conv2d(256, 256, kernel_size=3, padding=1),             nn.ReLU(inplace=True),             nn.MaxPool2d(kernel_size=3, stride=2),         )         self.avgpool = nn.AdaptiveAvgPool2d((6, 6))         self.classifier = nn.Sequential(             nn.Dropout(),             nn.Linear(256 * 6 * 6, 4096),             nn.ReLU(inplace=True),             nn.Dropout(),             nn.Linear(4096, 4096),             nn.ReLU(inplace=True),             nn.Linear(4096, num_classes),         )       @autocast()     def forward(self, x: torch.Tensor) -> torch.Tensor:         x = self.features(x)         x = self.avgpool(x)         x = torch.flatten(x, 1)         x = self.classifier(x)         return x     def alexnet(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> AlexNet:     r"""AlexNet model architecture from the     `"One weird trick..." `_ paper.     Args:         pretrained (bool): If True, returns a model pre-trained on ImageNet         progress (bool): If True, displays a progress bar of the download to stderr     """     model = AlexNet(**kwargs)     if pretrained:         state_dict = load_state_dict_from_url(model_urls["alexnet"],                                               progress=progress)         model.load_state_dict(state_dict)     return model
  train_DP_autocast.py 导入自己的alexnet.py import time import torch from alexnet import alexnet import torchvision from torch import nn from torch.utils.data import DataLoader from torchvision import transforms from torch.cuda.amp import autocast as autocast from torch.utils.tensorboard import SummaryWriter import numpy as np import argparse     def parse_args():     parser = argparse.ArgumentParser(description="CV Train")     parser.add_mutually_exclusive_group()     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")     parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")     parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")     parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()   # 1.Create SummaryWriter if args.tensorboard:     writer = SummaryWriter(args.tensorboard_log)   # 2.Ready dataset if args.dataset == "CIFAR10":     train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(         [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True) else:     raise ValueError("Dataset is not CIFAR10") cuda = torch.cuda.is_available() print("CUDA available: {}".format(cuda))   # 3.Length train_dataset_size = len(train_dataset) print("the train dataset size is {}".format(train_dataset_size))   # 4.DataLoader train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)   # 5.Create model model = alexnet()   if args.cuda == cuda:     model = model.cuda()     model = torch.nn.DataParallel(model).cuda() else:     model = torch.nn.DataParallel(model)   # 6.Create loss cross_entropy_loss = nn.CrossEntropyLoss()   # 7.Optimizer optim = torch.optim.AdamW(model.parameters(), lr=args.lr) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True) # 8. 
Set some parameters to control loop # epoch iter = 0 t0 = time.time() for epoch in range(args.epochs):     t1 = time.time()     print(" -----------------the {} number of training epoch --------------".format(epoch))     model.train()     for data in train_dataloader:         loss = 0         imgs, targets = data         if args.cuda == cuda:             cross_entropy_loss = cross_entropy_loss.cuda()             imgs, targets = imgs.cuda(), targets.cuda()         with autocast():             outputs = model(imgs)             loss_train = cross_entropy_loss(outputs, targets)         loss = loss_train.item() + loss         if args.tensorboard:             writer.add_scalar("train_loss", loss_train.item(), iter)           optim.zero_grad()         loss_train.backward()         optim.step()         iter = iter + 1         if iter % 100 == 0:             print(                 "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "                     .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),                             np.mean(loss)))     if args.tensorboard:         writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)     scheduler.step(np.mean(loss))     t2 = time.time()     h = (t2 - t1) // 3600     m = ((t2 - t1) % 3600) // 60     s = ((t2 - t1) % 3600) % 60     print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))       if epoch % 1 == 0:         print("Save state, iter: {} ".format(epoch))         torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))   torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint)) t3 = time.time() h_t = (t3 - t0) // 3600 m_t = ((t3 - t0) % 3600) // 60 s_t = ((t3 - t0) % 3600) // 60 print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t))) if args.tensorboard:     writer.close()
  Run result:
  Tensorboard view:
  Evaluation code:
  eval_DP.py is the same as in 2.(1), except it imports our own alexnet.py.
  Run result:
  Analysis:
  With autocast, DP takes 21 minutes 21 seconds for 20 epochs, 1 minute 1 second faster than DP without it (22 minutes 22 seconds).
  DP without AMP reached 0.8216 accuracy, whereas it now drops to 0.8188, so automatic mixed precision does cost some accuracy. Later this can be offset by increasing batch_size (AMP frees up memory), keeping the runtime roughly the same while letting accuracy recover.
  (3) Training and evaluation code for DP with autocast and GradScaler:
  Training code:
  train_DP_GradScaler.py 导入自己的alexnet.py import time import torch from alexnet import alexnet import torchvision from torch import nn from torch.utils.data import DataLoader from torchvision import transforms from torch.cuda.amp import autocast as autocast from torch.cuda.amp import GradScaler as GradScaler from torch.utils.tensorboard import SummaryWriter import numpy as np import argparse     def parse_args():     parser = argparse.ArgumentParser(description="CV Train")     parser.add_mutually_exclusive_group()     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")     parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")     parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")     parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()   # 1.Create SummaryWriter if args.tensorboard:     writer = SummaryWriter(args.tensorboard_log)   # 2.Ready dataset if args.dataset == "CIFAR10":     train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(         [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True) else:     raise ValueError("Dataset is not CIFAR10") cuda = torch.cuda.is_available() print("CUDA available: {}".format(cuda))   # 3.Length train_dataset_size = len(train_dataset) print("the train dataset size is {}".format(train_dataset_size))   # 4.DataLoader train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)   # 5.Create model model = alexnet()   if args.cuda == cuda:     model = model.cuda()     model = torch.nn.DataParallel(model).cuda() else:     model = torch.nn.DataParallel(model)   # 6.Create loss cross_entropy_loss = nn.CrossEntropyLoss()   # 7.Optimizer optim = torch.optim.AdamW(model.parameters(), lr=args.lr) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True) scaler = GradScaler() # 8. 
Set some parameters to control loop # epoch iter = 0 t0 = time.time() for epoch in range(args.epochs):     t1 = time.time()     print(" -----------------the {} number of training epoch --------------".format(epoch))     model.train()     for data in train_dataloader:         loss = 0         imgs, targets = data         optim.zero_grad()         if args.cuda == cuda:             cross_entropy_loss = cross_entropy_loss.cuda()             imgs, targets = imgs.cuda(), targets.cuda()         with autocast():             outputs = model(imgs)             loss_train = cross_entropy_loss(outputs, targets)             loss = loss_train.item() + loss         if args.tensorboard:             writer.add_scalar("train_loss", loss_train.item(), iter)           scaler.scale(loss_train).backward()         scaler.step(optim)         scaler.update()           iter = iter + 1         if iter % 100 == 0:             print(                 "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "                     .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),                             np.mean(loss)))     if args.tensorboard:         writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)     scheduler.step(np.mean(loss))     t2 = time.time()     h = (t2 - t1) // 3600     m = ((t2 - t1) % 3600) // 60     s = ((t2 - t1) % 3600) % 60     print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))       if epoch % 1 == 0:         print("Save state, iter: {} ".format(epoch))         torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))   torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint)) t3 = time.time() h_t = (t3 - t0) // 3600 m_t = ((t3 - t0) % 3600) // 60 s_t = ((t3 - t0) % 3600) // 60 print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t))) if args.tensorboard:     writer.close()
  Run result:
  Tensorboard view:
  Evaluation code:
  eval_DP.py is the same as in 2.(1), except it imports our own alexnet.py.
  Run result:
  Analysis:
  As before, GradScaler's loss scaling slows DP training down.
  With autocast and GradScaler, DP now reaches 0.8409 accuracy, up from 0.8188 with autocast alone, and also a clear improvement over DP without AMP (0.8216).
3. Single-process multi-GPU DDP training and evaluation code
  (1) Training and evaluation code for the original model with DDP:
  Training code:
  train_DDP.py import time import torch from torchvision.models.alexnet import alexnet import torchvision from torch import nn import torch.distributed as dist from torchvision import transforms from torch.utils.data import DataLoader from torch.utils.tensorboard import SummaryWriter import numpy as np import argparse     def parse_args():     parser = argparse.ArgumentParser(description="CV Train")     parser.add_mutually_exclusive_group()     parser.add_argument("--rank", type=int, default=0)     parser.add_argument("--world_size", type=int, default=1)     parser.add_argument("--master_addr", type=str, default="127.0.0.1")     parser.add_argument("--master_port", type=str, default="12355")     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")     parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")     parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")     parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()     def train():     dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),                             rank=args.rank,                             world_size=args.world_size)     # 1.Create SummaryWriter     if args.tensorboard:         writer = SummaryWriter(args.tensorboard_log)       # 2.Ready dataset     if args.dataset == "CIFAR10":         train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(             [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)       else:         raise ValueError("Dataset is not CIFAR10")       cuda = torch.cuda.is_available()     print("CUDA available: {}".format(cuda))       # 3.Length     train_dataset_size = len(train_dataset)     print("the train dataset size is {}".format(train_dataset_size))       train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)     # 4.DataLoader     train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,                                   num_workers=2,                                   pin_memory=True)       # 5.Create model     model = alexnet()       if args.cuda == cuda:         model = model.cuda()         model = torch.nn.parallel.DistributedDataParallel(model).cuda()     else:         model = torch.nn.parallel.DistributedDataParallel(model)       # 6.Create loss     cross_entropy_loss = nn.CrossEntropyLoss()       # 7.Optimizer     optim = torch.optim.AdamW(model.parameters(), lr=args.lr)     scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)       # 8. 
Set some parameters to control loop     # epoch     iter = 0     t0 = time.time()     for epoch in range(args.epochs):         t1 = time.time()         print(" -----------------the {} number of training epoch --------------".format(epoch))         model.train()         for data in train_dataloader:             loss = 0             imgs, targets = data             if args.cuda == cuda:                 cross_entropy_loss = cross_entropy_loss.cuda()                 imgs, targets = imgs.cuda(), targets.cuda()             outputs = model(imgs)             loss_train = cross_entropy_loss(outputs, targets)             loss = loss_train.item() + loss             if args.tensorboard:                 writer.add_scalar("train_loss", loss_train.item(), iter)               optim.zero_grad()             loss_train.backward()             optim.step()             iter = iter + 1             if iter % 100 == 0:                 print(                     "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "                         .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),                                 np.mean(loss)))         if args.tensorboard:             writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)         scheduler.step(np.mean(loss))         t2 = time.time()         h = (t2 - t1) // 3600         m = ((t2 - t1) % 3600) // 60         s = ((t2 - t1) % 3600) % 60         print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))           if epoch % 1 == 0:             print("Save state, iter: {} ".format(epoch))             torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))       torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))     t3 = time.time()     h_t = (t3 - t0) // 3600     m_t = ((t3 - t0) % 3600) // 60     s_t = ((t3 - t0) % 3600) // 60     print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))     if args.tensorboard:         writer.close()     if __name__ == "__main__":     local_size = torch.cuda.device_count()     print("local_size: ".format(local_size))     train()
  Run result:
  Tensorboard view:
  Evaluation code:
  eval_DDP.py import torch import torchvision import torch.distributed as dist from torch.utils.data import DataLoader from torchvision.transforms import transforms # from alexnet import alexnet from torchvision.models.alexnet import alexnet import argparse     # eval def parse_args():     parser = argparse.ArgumentParser(description="CV Evaluation")     parser.add_mutually_exclusive_group()     parser.add_argument("--rank", type=int, default=0)     parser.add_argument("--world_size", type=int, default=1)     parser.add_argument("--master_addr", type=str, default="127.0.0.1")     parser.add_argument("--master_port", type=str, default="12355")     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()     def eval():     dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),                             rank=args.rank,                             world_size=args.world_size)     # 1.Create model     model = alexnet()     model = torch.nn.parallel.DistributedDataParallel(model)       # 2.Ready Dataset     if args.dataset == "CIFAR10":         test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,                                                     transform=transforms.Compose(                                                         [transforms.Resize(args.img_size),                                                          transforms.ToTensor()]),                                                     download=True)       else:         raise ValueError("Dataset is not CIFAR10")       # 3.Length     test_dataset_size = len(test_dataset)     print("the test dataset size is {}".format(test_dataset_size))     test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset)       # 4.DataLoader     test_dataloader = DataLoader(dataset=test_dataset, sampler=test_sampler, batch_size=args.batch_size,                                  num_workers=2,                                  pin_memory=True)       # 5. Set some parameters for testing the network     total_accuracy = 0       # test     model.eval()     with torch.no_grad():         for data in test_dataloader:             imgs, targets = data             device = torch.device("cpu")             imgs, targets = imgs.to(device), targets.to(device)             model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)             model.load_state_dict(model_load)             outputs = model(imgs)             outputs = outputs.to(device)             accuracy = (outputs.argmax(1) == targets).sum()             total_accuracy = total_accuracy + accuracy             accuracy = total_accuracy / test_dataset_size         print("the total accuracy is {}".format(accuracy))     if __name__ == "__main__":     local_size = torch.cuda.device_count()     print("local_size: ".format(local_size))     eval()
  Run result:
  (2) Training and evaluation code for DDP with autocast:
  Training code:
  train_DDP_autocast.py 导入自己的alexnet.py import time import torch from alexnet import alexnet import torchvision from torch import nn import torch.distributed as dist from torchvision import transforms from torch.utils.data import DataLoader from torch.cuda.amp import autocast as autocast from torch.utils.tensorboard import SummaryWriter import numpy as np import argparse     def parse_args():     parser = argparse.ArgumentParser(description="CV Train")     parser.add_mutually_exclusive_group()     parser.add_argument("--rank", type=int, default=0)     parser.add_argument("--world_size", type=int, default=1)     parser.add_argument("--master_addr", type=str, default="127.0.0.1")     parser.add_argument("--master_port", type=str, default="12355")     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")     parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")     parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")     parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()     def train():     dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),                             rank=args.rank,                             world_size=args.world_size)     # 1.Create SummaryWriter     if args.tensorboard:         writer = SummaryWriter(args.tensorboard_log)       # 2.Ready dataset     if args.dataset == "CIFAR10":         train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(             [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)       else:         raise ValueError("Dataset is not CIFAR10")       cuda = torch.cuda.is_available()     print("CUDA available: {}".format(cuda))       # 3.Length     train_dataset_size = len(train_dataset)     print("the train dataset size is {}".format(train_dataset_size))       train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)     # 4.DataLoader     train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,                                   num_workers=2,                                   pin_memory=True)       # 5.Create model     model = alexnet()       if args.cuda == cuda:         model = model.cuda()         model = torch.nn.parallel.DistributedDataParallel(model).cuda()     else:         model = torch.nn.parallel.DistributedDataParallel(model)       # 6.Create loss     cross_entropy_loss = nn.CrossEntropyLoss()       # 7.Optimizer     optim = torch.optim.AdamW(model.parameters(), lr=args.lr)     scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)       # 8. 
Set some parameters to control loop     # epoch     iter = 0     t0 = time.time()     for epoch in range(args.epochs):         t1 = time.time()         print(" -----------------the {} number of training epoch --------------".format(epoch))         model.train()         for data in train_dataloader:             loss = 0             imgs, targets = data             if args.cuda == cuda:                 cross_entropy_loss = cross_entropy_loss.cuda()                 imgs, targets = imgs.cuda(), targets.cuda()             with autocast():                 outputs = model(imgs)                 loss_train = cross_entropy_loss(outputs, targets)             loss = loss_train.item() + loss             if args.tensorboard:                 writer.add_scalar("train_loss", loss_train.item(), iter)               optim.zero_grad()             loss_train.backward()             optim.step()             iter = iter + 1             if iter % 100 == 0:                 print(                     "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "                         .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),                                 np.mean(loss)))         if args.tensorboard:             writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)         scheduler.step(np.mean(loss))         t2 = time.time()         h = (t2 - t1) // 3600         m = ((t2 - t1) % 3600) // 60         s = ((t2 - t1) % 3600) % 60         print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))           if epoch % 1 == 0:             print("Save state, iter: {} ".format(epoch))             torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))       torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))     t3 = time.time()     h_t = (t3 - t0) // 3600     m_t = ((t3 - t0) % 3600) // 60     s_t = ((t3 - t0) % 3600) // 60     print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))     if args.tensorboard:         writer.close()     if __name__ == "__main__":     local_size = torch.cuda.device_count()     print("local_size: ".format(local_size))     train()
  Run result:
  Tensorboard view:
  Evaluation code:
  eval_DDP.py 导入自己的alexnet.py import torch import torchvision import torch.distributed as dist from torch.utils.data import DataLoader from torchvision.transforms import transforms from alexnet import alexnet # from torchvision.models.alexnet import alexnet import argparse     # eval def parse_args():     parser = argparse.ArgumentParser(description="CV Evaluation")     parser.add_mutually_exclusive_group()     parser.add_argument("--rank", type=int, default=0)     parser.add_argument("--world_size", type=int, default=1)     parser.add_argument("--master_addr", type=str, default="127.0.0.1")     parser.add_argument("--master_port", type=str, default="12355")     parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")     parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")     parser.add_argument("--img_size", type=int, default=227, help="image size")     parser.add_argument("--batch_size", type=int, default=64, help="batch size")     parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")     return parser.parse_args()     args = parse_args()     def eval():     dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),                             rank=args.rank,                             world_size=args.world_size)     # 1.Create model     model = alexnet()     model = torch.nn.parallel.DistributedDataParallel(model)       # 2.Ready Dataset     if args.dataset == "CIFAR10":         test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,                                                     transform=transforms.Compose(                                                         [transforms.Resize(args.img_size),                                                          transforms.ToTensor()]),                                                     download=True)       else:         raise ValueError("Dataset is not CIFAR10")       # 3.Length     test_dataset_size = len(test_dataset)     print("the test dataset size is {}".format(test_dataset_size))     test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset)       # 4.DataLoader     test_dataloader = DataLoader(dataset=test_dataset, sampler=test_sampler, batch_size=args.batch_size,                                  num_workers=2,                                  pin_memory=True)       # 5. Set some parameters for testing the network     total_accuracy = 0       # test     model.eval()     with torch.no_grad():         for data in test_dataloader:             imgs, targets = data             device = torch.device("cpu")             imgs, targets = imgs.to(device), targets.to(device)             model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)             model.load_state_dict(model_load)             outputs = model(imgs)             outputs = outputs.to(device)             accuracy = (outputs.argmax(1) == targets).sum()             total_accuracy = total_accuracy + accuracy             accuracy = total_accuracy / test_dataset_size         print("the total accuracy is {}".format(accuracy))     if __name__ == "__main__":     local_size = torch.cuda.device_count()     print("local_size: ".format(local_size))     eval()
  Run result:
  Analysis:
  DDP without AMP took 21 minutes 21 seconds, while DDP with autocast took 20 minutes 20 seconds, so the speed improved.
  DDP without AMP reached 0.8224 accuracy; with autocast it dropped to 0.8162.
  (3) Training and evaluation code for DDP with autocast and GradScaler:
  Training code:
train_DDP_GradScaler.py (imports the local alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
import torch.distributed as dist
from torchvision import transforms
from torch.utils.data import DataLoader
from torch.cuda.amp import autocast
from torch.cuda.amp import GradScaler
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description="CV Train")
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument("--dataset", type=str, default="CIFAR10", help="CIFAR10")
    parser.add_argument("--dataset_root", type=str, default="../data", help="Dataset root directory path")
    parser.add_argument("--img_size", type=int, default=227, help="image size")
    parser.add_argument("--tensorboard", type=str, default=True, help="Use tensorboard for loss visualization")
    parser.add_argument("--tensorboard_log", type=str, default="../tensorboard", help="tensorboard folder")
    parser.add_argument("--cuda", type=str, default=True, help="if is cuda available")
    parser.add_argument("--batch_size", type=int, default=64, help="batch size")
    parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")
    parser.add_argument("--epochs", type=int, default=20, help="Number of epochs to train.")
    parser.add_argument("--checkpoint", type=str, default="../checkpoint", help="Save .pth fold")
    return parser.parse_args()


args = parse_args()


def train():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank, world_size=args.world_size)
    # 1.Create SummaryWriter
    if args.tensorboard:
        writer = SummaryWriter(args.tensorboard_log)

    # 2.Ready dataset
    if args.dataset == "CIFAR10":
        train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
            [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
    else:
        raise ValueError("Dataset is not CIFAR10")

    cuda = torch.cuda.is_available()
    print("CUDA available: {}".format(cuda))

    # 3.Length
    train_dataset_size = len(train_dataset)
    print("the train dataset size is {}".format(train_dataset_size))

    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)

    # 4.DataLoader
    train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,
                                  num_workers=2, pin_memory=True)

    # 5.Create model
    model = alexnet()

    if args.cuda == cuda:
        model = model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(model).cuda()
    else:
        model = torch.nn.parallel.DistributedDataParallel(model)

    # 6.Create loss
    cross_entropy_loss = nn.CrossEntropyLoss()

    # 7.Optimizer
    optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
    scaler = GradScaler()

    # 8.Set some parameters to control loop
    # epoch
    iter = 0
    t0 = time.time()
    for epoch in range(args.epochs):
        t1 = time.time()
        print(" -----------------the {} number of training epoch --------------".format(epoch))
        model.train()
        for data in train_dataloader:
            loss = 0
            imgs, targets = data
            optim.zero_grad()
            if args.cuda == cuda:
                cross_entropy_loss = cross_entropy_loss.cuda()
                imgs, targets = imgs.cuda(), targets.cuda()
            # forward pass and loss run under autocast, so eligible ops use FP16
            with autocast():
                outputs = model(imgs)
                loss_train = cross_entropy_loss(outputs, targets)
                loss = loss_train.item() + loss
            if args.tensorboard:
                writer.add_scalar("train_loss", loss_train.item(), iter)

            # backward on the scaled loss, then let the scaler unscale, step and update
            scaler.scale(loss_train).backward()
            scaler.step(optim)
            scaler.update()

            iter = iter + 1
            if iter % 100 == 0:
                print(
                    "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                        .format(epoch, iter, optim.param_groups[0]["lr"], loss_train.item(),
                                np.mean(loss)))
        if args.tensorboard:
            writer.add_scalar("lr", optim.param_groups[0]["lr"], epoch)
        scheduler.step(np.mean(loss))
        t2 = time.time()
        h = (t2 - t1) // 3600
        m = ((t2 - t1) % 3600) // 60
        s = ((t2 - t1) % 3600) % 60
        print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

        # save a checkpoint every epoch
        if epoch % 1 == 0:
            print("Save state, iter: {} ".format(epoch))
            torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

    torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
    t3 = time.time()
    h_t = (t3 - t0) // 3600
    m_t = ((t3 - t0) % 3600) // 60
    s_t = ((t3 - t0) % 3600) % 60
    print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
    if args.tensorboard:
        writer.close()


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: {}".format(local_size))
    train()
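Compared with the plain DDP training script, the AMP-specific changes are small: create one GradScaler, wrap the forward pass and loss in autocast(), and route backward/step/update through the scaler. The generic sketch below distills that pattern; it is illustrative only, and model, dataloader, loss_fn and optimizer are placeholders, not code from this article.

from torch.cuda.amp import autocast, GradScaler


def train_amp(model, dataloader, loss_fn, optimizer, epochs):
    scaler = GradScaler()                          # created once; keeps a running loss-scale factor
    model.train()
    for epoch in range(epochs):
        for imgs, targets in dataloader:
            imgs, targets = imgs.cuda(), targets.cuda()
            optimizer.zero_grad()
            with autocast():                       # forward + loss run in mixed FP16/FP32
                outputs = model(imgs)
                loss = loss_fn(outputs, targets)
            scaler.scale(loss).backward()          # backward on the scaled loss (avoids FP16 underflow)
            scaler.step(optimizer)                 # unscales grads, skips the step if they contain inf/nan
            scaler.update()                        # grows/shrinks the scale for the next iteration

The scaler is deliberately created once, outside the epoch loop, so the loss-scale factor it learns carries over between iterations instead of being reset.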
Running results:
Tensorboard observation:
Evaluation source code:
eval_DDP.py is the same as in 3.(2); it imports the local alexnet.py.
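The evaluation script itself is not repeated here. For orientation only, the following is a minimal sketch of what such a DDP evaluation loop with autocast might look like; it is not the author's eval_DDP.py, and the paths, 227x227 resize, batch size and gloo/single-process setup simply mirror the training script above.

import torch
import torchvision
import torch.distributed as dist
from torch.utils.data import DataLoader
from torchvision import transforms
from torch.cuda.amp import autocast
from alexnet import alexnet  # assumes the same local alexnet.py used by the training script


def evaluate():
    dist.init_process_group("gloo", init_method="tcp://127.0.0.1:12355", rank=0, world_size=1)
    test_dataset = torchvision.datasets.CIFAR10(root="../data", train=False,
                                                transform=transforms.Compose(
                                                    [transforms.Resize(227), transforms.ToTensor()]),
                                                download=True)
    test_dataloader = DataLoader(dataset=test_dataset, batch_size=64)

    model = alexnet()
    use_cuda = torch.cuda.is_available()
    if use_cuda:
        model = torch.nn.parallel.DistributedDataParallel(model.cuda()).cuda()
    else:
        model = torch.nn.parallel.DistributedDataParallel(model)

    # the checkpoint saved by the DDP training script already carries the "module." key prefix
    state_dict = torch.load("../checkpoint/AlexNet.pth", map_location="cuda" if use_cuda else "cpu")
    model.load_state_dict(state_dict)
    model.eval()

    correct = 0
    with torch.no_grad():
        for imgs, targets in test_dataloader:
            if use_cuda:
                imgs, targets = imgs.cuda(), targets.cuda()
            with autocast(enabled=use_cuda):  # FP16 inference on CUDA; disabled otherwise
                outputs = model(imgs)
            correct += (outputs.argmax(dim=1) == targets).sum().item()
    print("accuracy: {:.4f}".format(correct / len(test_dataset)))
    dist.destroy_process_group()


if __name__ == "__main__":
    evaluate()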
Running results:
Analysis:
The script runs correctly and is noticeably faster than DDP without AMP (20m20s versus 21m21s). Accuracy also improves: DDP without AMP reached 0.8224, while DDP with autocast and GradScaler reaches 0.8252.
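A likely reason autocast alone lost accuracy while adding GradScaler recovered it is FP16 gradient underflow: values smaller than FP16 can represent silently become zero during backward. The small self-contained illustration below shows the effect and what loss scaling does about it; the numbers are made up purely for demonstration.

import torch

tiny_grad = torch.tensor(1e-8)              # a very small gradient value, stored in FP32
print(tiny_grad.half())                     # prints 0.0: the value underflows when cast to FP16

scale = 2.0 ** 16                           # the kind of factor GradScaler maintains automatically
scaled_fp16 = (tiny_grad * scale).half()    # the scaled value fits comfortably in FP16
print(scaled_fp16.float() / scale)          # unscale back in FP32: ~1e-08, the value survives

GradScaler automates exactly this: it multiplies the loss before backward, checks the resulting gradients for inf/nan, unscales them before the optimizer step, and adjusts the factor over time.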
References:
1. PyTorch Automatic Mixed Precision (AMP) training: https://blog.csdn.net/ytusdc/article/details/122152244
2. PyTorch distributed training basics -- using DDP: https://zhuanlan.zhihu.com/p/358974461
