In PyTorch, I wrote a very simple CNN discriminator and trained it. Now I need to deploy it to make predictions, but the target machine has little GPU memory and I get an out-of-memory error. So I thought I could set requires_grad = False to keep PyTorch from storing gradient values, but it didn't seem to make any difference.

My model has about 5 million parameters, yet predicting a single batch of input consumes about 1.2 GB of memory. I don't think that much memory should be needed.

The question: how can I reduce GPU memory usage when I only want to use the model for prediction?

Here is a demo. I use discriminator.requires_grad_ to disable/enable autograd for all parameters, but it doesn't appear to help.
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as functional
from pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()

def getMemoryUsage():
    usage = nvsmi.DeviceQuery("memory.used")["gpu"][0]["fb_memory_usage"]
    return "%d %s" % (usage["used"], usage["unit"])

print("Before GPU Memory: %s" % getMemoryUsage())

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # trainable layers
        # input: 2x256x256
        self.conv1 = nn.Conv2d(2, 8, 5, padding=2)    # 8x256x256
        self.pool1 = nn.MaxPool2d(2)                  # 8x128x128
        self.conv2 = nn.Conv2d(8, 32, 5, padding=2)   # 32x128x128
        self.pool2 = nn.MaxPool2d(2)                  # 32x64x64
        self.conv3 = nn.Conv2d(32, 96, 5, padding=2)  # 96x64x64
        self.pool3 = nn.MaxPool2d(4)                  # 96x16x16
        self.conv4 = nn.Conv2d(96, 256, 5, padding=2) # 256x16x16
        self.pool4 = nn.MaxPool2d(4)                  # 256x4x4
        self.num_flat_features = 4096
        self.fc1 = nn.Linear(4096, 1024)
        self.fc2 = nn.Linear(1024, 256)
        self.fc3 = nn.Linear(256, 1)
        # loss function
        self.loss = nn.MSELoss()
        # other properties
        self.requires_grad = True

    def forward(self, x):
        y = x
        y = self.conv1(y)
        y = self.pool1(y)
        y = functional.relu(y)
        y = self.conv2(y)
        y = self.pool2(y)
        y = functional.relu(y)
        y = self.conv3(y)
        y = self.pool3(y)
        y = functional.relu(y)
        y = self.conv4(y)
        y = self.pool4(y)
        y = functional.relu(y)
        y = y.view((-1, self.num_flat_features))
        y = self.fc1(y)
        y = functional.relu(y)
        y = self.fc2(y)
        y = functional.relu(y)
        y = self.fc3(y)
        y = torch.sigmoid(y)
        return y

    def predict(self, x, score_th=0.5):
        if len(x.shape) == 3:
            singlebatch = True
            x = x.view([1] + list(x.shape))
        else:
            singlebatch = False
        y = self.forward(x)
        label = (y > float(score_th))
        if singlebatch:
            y = y.view(list(y.shape)[1:])
        return label, y

    def requires_grad_(self, requires_grad=True):
        for parameter in self.parameters():
            parameter.requires_grad_(requires_grad)
        self.requires_grad = requires_grad

x = torch.cuda.FloatTensor(np.zeros([2, 256, 256]))
discriminator = Discriminator()
discriminator.to("cuda:0")
# comment/uncomment this line to see the difference
discriminator.requires_grad_(False)
discriminator.predict(x)
print("Requires grad", discriminator.requires_grad)
print("After GPU Memory: %s" % getMemoryUsage())
With the line discriminator.requires_grad_(False) commented out, I get this output:
Before GPU Memory: 6350MiB
Requires grad True
After GPU Memory: 7547MiB
With the line uncommented, I get:
Before GPU Memory: 6350MiB
Requires grad False
After GPU Memory: 7543MiB
You can use pynvml.
This Python tool was made by Nvidia, so you can query the GPU from Python like this:
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
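As a sketch of how you might read that query's result, assuming the free/total fields land in the same fb_memory_usage dict that the question's getMemoryUsage() unpacks:

from pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()
# assumption: the result mirrors the "memory.used" layout used in
# getMemoryUsage() above, with "free"/"total" keys instead of "used"
usage = nvsmi.DeviceQuery("memory.free, memory.total")["gpu"][0]["fb_memory_usage"]
print("free: %d %s of %d %s" % (usage["free"], usage["unit"], usage["total"], usage["unit"]))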
You can also execute, at any time:
torch.cuda.empty_cache()
to empty the cache, and you will find even more free memory that way.
Before calling torch.cuda.empty_cache(), if you have objects you no longer use, you can release them like this:

obj = None
and after that call

import gc
gc.collect()
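Putting those pieces together, a minimal cleanup sketch (the name big_tensor is illustrative, not from the question):

import gc
import torch

big_tensor = torch.zeros(1024, 1024, device="cuda:0")  # hypothetical object we are done with

big_tensor = None           # drop the Python reference
gc.collect()                # let Python reclaim the unreferenced object
torch.cuda.empty_cache()    # return cached blocks to the GPU driver

Note that torch.cuda.empty_cache() releases memory held in PyTorch's caching allocator back to the driver; it mainly makes that memory visible as free to other processes and to tools like nvidia-smi, since PyTorch would otherwise have reused the cached blocks itself.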
Try using model.eval() together with torch.no_grad() on your target machine when making predictions. model.eval() switches the model's layers to evaluation mode, and torch.no_grad() deactivates the autograd engine, which reduces memory usage.
x = torch.cuda.FloatTensor(np.zeros([2, 256, 256]))
discriminator = Discriminator()
discriminator.to("cuda:0")
discriminator.eval()
with torch.no_grad():
    discriminator.predict(x)
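If predict() is called from several places, torch.no_grad() can also be applied as a decorator so the guard is never forgotten. A minimal sketch with a hypothetical stand-in model (TinyNet is not from the question):

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # hypothetical stand-in for the Discriminator above
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 1)

    @torch.no_grad()   # every call to predict runs with autograd disabled
    def predict(self, x):
        return torch.sigmoid(self.fc(x))

net = TinyNet().eval()
print(net.predict(torch.zeros(2, 4)).requires_grad)  # False: no graph recorded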