PyTorch RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly

ihd*_*hdv 8 python python-3.x pytorch

I'm a beginner with PyTorch and was just trying out some examples from a web page. But I can't seem to run the "super_resolution" program because of this error:

RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly

I searched online and found people suggesting setting num_workers to 0. But if I do that, the program tells me it is running out of memory (either CPU or GPU):

RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9663676416 bytes. Buy new RAM!

or

RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 0 bytes free; 2.03 GiB reserved in total by PyTorch)

How can I fix this?


I'm using Python 3.8 with PyTorch 1.4.0 on Windows 10 (64-bit).


More complete error messages (--cuda means using the GPU, --threads x means passing x as the num_workers argument):

  1. With command-line arguments --upscale_factor 1 --cuda
  File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 761, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "E:\Python38\lib\multiprocessing\queues.py", line 108, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "Z:\super_resolution\main.py", line 81, in <module>
    train(epoch)
  File "Z:\super_resolution\main.py", line 48, in train
    for iteration, batch in enumerate(training_data_loader, 1):
  File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
    data = self._next_data()
  File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 841, in _next_data
    idx, data = self._get_data()
  File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 808, in _get_data
    success, data = self._try_get_data()
  File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 16596, 9376, 12756, 9844) exited unexpectedly
  2. With command-line arguments --upscale_factor 1 --cuda --threads 0
  File "Z:\super_resolution\main.py", line 81, in <module>
    train(epoch)
  File "Z:\super_resolution\main.py", line 52, in train
    loss = criterion(model(input), target)
  File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "Z:\super_resolution\model.py", line 21, in forward
    x = self.relu(self.conv2(x))
  File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 341, in conv2d_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 954.35 MiB free; 2.03 GiB reserved in total by PyTorch)

Ane*_*n K 32

This is the solution that worked for me, and it may work for other Windows users as well. Simply remove/comment out num_workers to disable parallel loading.

  • Setting num_workers=0 should usually give you a better traceback that actually tells you what went wrong if the error persists, or, on slower machines, a smooth but slower non-parallel run (3 upvotes)
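As a minimal sketch of the fix above, here is a DataLoader built with num_workers=0 so all batches are loaded in the main process. The toy TensorDataset and its sizes are made up for illustration; they are not from the super_resolution example.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset standing in for the real training data.
inputs = torch.randn(8, 1, 32, 32)
targets = torch.randn(8, 1, 32, 32)
dataset = TensorDataset(inputs, targets)

# num_workers=0 loads batches in the main process: slower, but it skips
# the worker subprocesses that exit unexpectedly on Windows, and any
# real error then appears directly in the traceback.
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=0)

for iteration, (input, target) in enumerate(loader, 1):
    print(iteration, input.shape)
```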

ccl*_*ccl 8

There is no "complete" solution to GPU out-of-memory errors, but there are many things you can do to reduce memory requirements. Also, make sure you are not passing both the training and test sets to the GPU at the same time!

  1. Reduce the batch size to 1
  2. Reduce the dimensionality of the fully connected layers (they are the most memory-hungry)
  3. (Image data) Apply center cropping
  4. (Image data) Convert RGB data to grayscale
  5. (Text data) Truncate the input at n characters (this probably won't help much)
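A few of the mitigations above (batch size 1, center cropping, RGB-to-grayscale) can be sketched with plain tensor operations. The image tensor and crop size here are invented for illustration; real code would typically use torchvision transforms instead.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical batch of RGB images standing in for the real dataset.
images = torch.randn(8, 3, 64, 64)

# (4) RGB -> grayscale: average over the channel dimension (a crude
# luminance approximation), cutting per-image memory by 3x.
gray = images.mean(dim=1, keepdim=True)            # shape (8, 1, 64, 64)

# (3) Center crop to 32x32: keep only the middle of each image.
h, w = gray.shape[-2:]
top, left = (h - 32) // 2, (w - 32) // 2
cropped = gray[..., top:top + 32, left:left + 32]  # shape (8, 1, 32, 32)

# (1) batch_size=1 keeps only one sample's activations on the GPU at a
# time, trading training speed for a much smaller memory footprint.
loader = DataLoader(TensorDataset(cropped), batch_size=1)
```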

Alternatively, you can try running on Google Colaboratory (12-hour session limit on a K80 GPU) or Next Journal, both of which offer up to 12 GB of free GPU memory. Worst case, you may have to train on the CPU. Hope this helps!