PyTorch AssertionError: Torch not compiled with CUDA enabled

Lak*_*rma 11 python deep-learning pytorch

I am trying to run the code from this repo. I disabled CUDA by changing lines 39/40 in main.py from

parser.add_argument('--type', default='torch.cuda.FloatTensor', help='type of tensor - e.g torch.cuda.HalfTensor')
to

parser.add_argument('--type', default='torch.FloatTensor', help='type of tensor - e.g torch.HalfTensor')

Nevertheless, running the code gives me the following exception:

Traceback (most recent call last):
  File "main.py", line 190, in <module>
    main()
  File "main.py", line 178, in main
    model, train_data, training=True, optimizer=optimizer)
  File "main.py", line 135, in forward
    for i, (imgs, (captions, lengths)) in enumerate(data):
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 201, in __next__
    return self._process_next_batch(batch)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
AssertionError: Traceback (most recent call last):
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 62, in _pin_memory_loop
    batch = pin_memory_batch(batch)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 123, in pin_memory_batch
    return [pin_memory_batch(sample) for sample in batch]
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 123, in <listcomp>
    return [pin_memory_batch(sample) for sample in batch]
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 117, in pin_memory_batch
    return batch.pin_memory()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/tensor.py", line 82, in pin_memory
    return type(self)().set_(storage.pin_memory()).view_as(self)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/storage.py", line 83, in pin_memory
    allocator = torch.cuda._host_allocator()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 220, in _host_allocator
    _lazy_init()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 84, in _lazy_init
    _check_driver()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 51, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

I spent some time looking through the issues on the PyTorch GitHub, to no avail. Please help?

Was*_*mad 6

If you look at the data.py file, you can see this function:

def get_iterator(data, batch_size=32, max_length=30, shuffle=True, num_workers=4, pin_memory=True):
    cap, vocab = data
    return torch.utils.data.DataLoader(
        cap,
        batch_size=batch_size, shuffle=shuffle,
        collate_fn=create_batches(vocab, max_length),
        num_workers=num_workers, pin_memory=pin_memory)

which is called twice in the main.py file to get iterators for the train and dev data. If you look at the DataLoader class in PyTorch, it has a parameter called:

pin_memory (bool, optional) – If True, the data loader will copy tensors into CUDA pinned memory before returning them.

which defaults to True in the get_iterator function. As a result, you get this error. You can simply pass pin_memory=False when you call get_iterator, like this:

train_data = get_iterator(get_coco_data(vocab, train=True),
                          batch_size=args.batch_size,
                          ...,
                          ...,
                          ...,
                          pin_memory=False)
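A more portable variant (a sketch, not from the repo itself) is to set pin_memory based on whether CUDA is actually usable, so the same call works on both CPU-only and GPU machines:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the repo's caption dataset; any Dataset works the same way.
dataset = TensorDataset(torch.arange(8).float().unsqueeze(1))

# Pin memory only when a CUDA-enabled build and a working GPU are present.
use_pinned = torch.cuda.is_available()
loader = DataLoader(dataset, batch_size=4, pin_memory=use_pinned)

for (batch,) in loader:
    print(batch.shape)  # two batches of shape [4, 1]
```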


小智 6

Removing .cuda() worked for me on macOS.
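More generally, instead of deleting the .cuda() calls you can pick the device at runtime; this is a common PyTorch pattern (a sketch with a placeholder model, not code from the repo):

```python
import torch

# Fall back to CPU when CUDA is unavailable or Torch was built without it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)   # instead of model.cuda()
x = torch.randn(3, 4).to(device)           # instead of x.cuda()
print(model(x).shape)  # torch.Size([3, 2])
```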


小智 5

In my case, I had not installed CUDA-enabled PyTorch in my Anaconda environment. Note that you need a CUDA-capable GPU for this to work.

Follow this link to install PyTorch for the specific CUDA version you have: https://pytorch.org/get-started/locally/

In my case, I installed this version: conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
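After installing, you can check whether the build you got actually has CUDA support before relying on any .cuda() calls (the printed values below depend on your install, so they are only illustrative):

```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version the build targets, or None for a CPU-only build
print(torch.cuda.is_available())  # False on a CPU-only build or when no GPU/driver is present
```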