CUDA error during training

ann*_*dsf 5 machine-learning neural-network conv-neural-network pytorch

During training, the following CUDA error occurs:

 CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

I have not run into this problem before, and searching Google turned up no solution that helped. It may be related to my GPU having very little memory (2 GB).
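As a first sanity check on the 2 GB hypothesis, the raw input batch itself is tiny; the bulk of GPU memory goes to model weights, activations, and gradients. A rough back-of-the-envelope estimate (the input shape 3×224×224 is an assumption, substitute your own):

```python
# Rough size of one float32 input batch on the GPU.
# Shape N x C x H x W = 4 x 3 x 224 x 224 is an assumed example.
batch_size = 4
elements = batch_size * 3 * 224 * 224
bytes_per_float32 = 4
batch_mb = elements * bytes_per_float32 / 1024**2
print(f"{batch_mb:.2f} MiB per input batch")  # activations and weights cost far more
```

If the model plus activations does approach 2 GB, lowering `batch_size` is the usual first remedy.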

Training code:

import time

import numpy as np
import torch
import torch.nn.functional as F
from torch.autograd import Variable  # deprecated in recent PyTorch; kept to match the traceback

num_epochs = 2
batch_size = 4

train_loss, val_accuracy = [], []


for epoch in range(num_epochs):
    # In each epoch, we do a full pass over the training data:
    start_time = time.time()
    model.train(True) # enable dropout / batch_norm training behavior
    for (X_batch, y_batch) in train_batch_gen:
        loss = compute_loss(X_batch, y_batch)
        loss.backward()
        opt.step()
        opt.zero_grad()
        train_loss.append(loss.data.cpu().numpy())
    model.train(False) # disable dropout / use averages for batch_norm
    for X_batch, y_batch in val_batch_gen:
        logits = model(Variable(torch.FloatTensor(X_batch)).cuda())
        y_pred = logits.max(1)[1].data
        val_accuracy.append(np.mean( (y_batch.cpu() == y_pred.cpu()).numpy() ))
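Besides memory pressure, one frequently reported trigger for this exact cuBLAS error is a target label outside `[0, num_classes - 1]` reaching `F.cross_entropy` (or a final `Linear` layer whose output size does not match the number of classes). A quick pure-Python sanity check you can run on your labels before training; the helper name and sample values are illustrative:

```python
def labels_in_range(labels, num_classes):
    """Return True if every label is a valid class index in [0, num_classes - 1]."""
    return all(0 <= int(y) < num_classes for y in labels)

print(labels_in_range([0, 3, 9], num_classes=10))   # True: all valid
print(labels_in_range([0, 3, 10], num_classes=10))  # False: 10 is out of range
```

If this check fails for any batch, fix the label encoding (or the output dimension of the last layer) before suspecting GPU memory.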

Traceback:

RuntimeError                              Traceback (most recent call last)
 in 
     11     for (X_batch, y_batch) in train_batch_gen:
     12         # train on batch
---> 13         loss = compute_loss(X_batch, y_batch)
     14         loss.backward()
     15         opt.step()

 in compute_loss(X_batch, y_batch)
      2     X_batch = Variable(torch.FloatTensor(X_batch)).cuda()
      3     y_batch = Variable(torch.LongTensor(y_batch)).cuda()
----> 4     logits = model.cuda()(X_batch)
      5     return F.cross_entropy(logits, y_batch).mean()

c:\program files\python38\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

c:\program files\python38\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
     98     def forward(self, input):
     99         for module in self:
--> 100             input = module(input)
    101         return input
    102 

c:\program files\python38\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

c:\program files\python38\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
     85 
     86     def forward(self, input):
---> 87         return F.linear(input, self.weight, self.bias)
     88 
     89     def extra_repr(self):

c:\program files\python38\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
   1608     if input.dim() == 2 and bias is not None:
   1609         # fused op is marginally faster
-> 1610         ret = torch.addmm(bias, input, weight.t())
   1611     else:
   1612         output = input.matmul(weight.t())

RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`