MemoryError in TensorFlow; and "successful NUMA node read from SysFS had negative value (-1)" with Xen

Ste*_*hen 6 python-3.x deep-learning tensorflow

I am using TensorFlow version:

0.12.1

The CUDA toolkit version is 8:

lrwxrwxrwx  1 root root   19 May 28 17:27 cuda -> /usr/local/cuda-8.0

As described here, I have downloaded and installed cuDNN. But when executing the following lines from my Python script, I get the error message mentioned in the title:

model.fit_generator(train_generator,
                    steps_per_epoch=len(train_samples),
                    validation_data=validation_generator,
                    validation_steps=len(validation_samples),
                    epochs=9)

The detailed error message is as follows:

Using TensorFlow backend. 
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally 
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally 
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally 
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally 
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally 
Epoch 1/9
Exception in thread Thread-1:
Traceback (most recent call last):
  File " lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File " lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File " lib/python3.5/site-packages/keras/engine/training.py", line 612, in data_generator_task
    generator_output = next(self._generator)
StopIteration

I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GRID K520
major: 3 minor: 0 memoryClockRate (GHz) 0.797
pciBusID 0000:00:03.0
Total memory: 3.94GiB
Free memory: 3.91GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GRID K520, pci bus id: 0000:00:03.0)
Traceback (most recent call last):
  File "model_new.py", line 82, in <module>
    model.fit_generator(train_generator, steps_per_epoch= len(train_samples),validation_data=validation_generator, validation_steps=len(validation_samples),epochs=9)
  File " lib/python3.5/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File " lib/python3.5/site-packages/keras/models.py", line 1110, in fit_generator
    initial_epoch=initial_epoch)
  File " lib/python3.5/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File " lib/python3.5/site-packages/keras/engine/training.py", line 1890, in fit_generator
    class_weight=class_weight)
  File " lib/python3.5/site-packages/keras/engine/training.py", line 1633, in train_on_batch
    outputs = self.train_function(ins)
  File " lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2229, in __call__
    feed_dict=feed_dict)
  File " lib/python3.5/site-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File " lib/python3.5/site-packages/tensorflow/python/client/session.py", line 937, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File " lib/python3.5/site-packages/numpy/core/numeric.py", line 531, in asarray
    return array(a, dtype, copy=False, order=order)
MemoryError

Any suggestions to resolve this error would be appreciated.

EDIT: The error is fatal.

uname -a
Linux ip-172-31-76-109 4.4.0-78-generic #99-Ubuntu SMP
Thu Apr 27 15:29:09 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

sudo lshw -short
[sudo] password for carnd:
H/W path    Device  Class      Description
==========================================
                    system     HVM domU
/0                  bus        Motherboard
/0/0                memory     96KiB BIOS
/0/401              processor  Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
/0/402              processor  CPU
/0/403              processor  CPU
/0/404              processor  CPU
/0/405              processor  CPU
/0/406              processor  CPU
/0/407              processor  CPU
/0/408              processor  CPU
/0/1000             memory     15GiB System Memory
/0/1000/0           memory     15GiB DIMM RAM
/0/100              bridge     440FX - 82441FX PMC [Natoma]
/0/100/1            bridge     82371SB PIIX3 ISA [Natoma/Triton II]
/0/100/1.1          storage    82371SB PIIX3 IDE [Natoma/Triton II]
/0/100/1.3          bridge     82371AB/EB/MB PIIX4 ACPI
/0/100/2            display    GD 5446
/0/100/3            display    GK104GL [GRID K520]
/0/100/1f           generic    Xen Platform Device
/1          eth0    network    Ethernet interface

EDIT 2:

This is an EC2 instance in the Amazon cloud, and all of the numa_node files have the value -1:

:/sys$ find . -name numa_node -exec cat '{}' \;
find: ‘./fs/fuse/connections/39’: Permission denied
-1
-1
-1
-1
-1
-1
-1
find: ‘./kernel/debug’: Permission denied
Run Code Online (Sandbox Code Playgroud)

EDIT 3: After updating the numa_node files, the NUMA-related message is gone. But all the other errors listed above remain, and I again get the fatal error:

Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
Epoch 1/9
Exception in thread Thread-1:
Traceback (most recent call last):
  File " lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File " lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File " lib/python3.5/site-packages/keras/engine/training.py", line 612, in data_generator_task
    generator_output = next(self._generator)
StopIteration

I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GRID K520
major: 3 minor: 0 memoryClockRate (GHz) 0.797
pciBusID 0000:00:03.0
Total memory: 3.94GiB
Free memory: 3.91GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GRID K520, pci bus id: 0000:00:03.0)
Traceback (most recent call last):
  File "model_new.py", line 85, in <module>
    model.fit_generator(train_generator, steps_per_epoch= len(train_samples),validation_data=validation_generator, validation_steps=len(validation_samples),epochs=9)
  File " lib/python3.5/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File " lib/python3.5/site-packages/keras/models.py", line 1110, in fit_generator
    initial_epoch=initial_epoch)
  File " lib/python3.5/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File " lib/python3.5/site-packages/keras/engine/training.py", line 1890, in fit_generator
    class_weight=class_weight)
  File " lib/python3.5/site-packages/keras/engine/training.py", line 1633, in train_on_batch
    outputs = self.train_function(ins)
  File " lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2229, in __call__
    feed_dict=feed_dict)
  File " lib/python3.5/site-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File " lib/python3.5/site-packages/tensorflow/python/client/session.py", line 937, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File " lib/python3.5/site-packages/numpy/core/numeric.py", line 531, in asarray
    return array(a, dtype, copy=False, order=order)
MemoryError

nor*_*ius 27

This amends the accepted answer.

Annoyingly, the numa_node setting is reset (to the value -1) every time the system reboots. To fix this more permanently, you can create a crontab entry (as root).

The following steps worked for me:

# 1) Identify the PCI-ID (with domain) of your GPU
#    For example: PCI_ID="0000:81:00.0"
lspci -D | grep NVIDIA
# 2) Add a crontab for root
sudo crontab -e
#    Add the following line
@reboot (echo 0 | tee -a "/sys/bus/pci/devices/<PCI_ID>/numa_node")

This ensures that the NUMA affinity of the GPU device is set to 0 after every reboot.

Again, keep in mind that this is only a "shallow" fix, since the Nvidia driver is not aware of it:

nvidia-smi topo -m
#       GPU0  CPU Affinity  NUMA Affinity
# GPU0     X  0-127         N/A

  • @normanius, regarding "the Nvidia driver is not aware of it": what is the practical consequence of this, and does it matter once the workaround is in place? (7 upvotes)

osg*_*sgx 14

There is code that prints the message "successful NUMA node read from SysFS had negative value (-1)", and it is not a fatal error, it is just a warning. The real error is the MemoryError raised from File "model_new.py", line 85, in <module>. We need more of your source to check this error. Try making your model smaller, or run on a server with more RAM.
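Separately, the StopIteration from Thread-1 in your log suggests the data generator was exhausted: Keras expects the generator passed to fit_generator to yield batches indefinitely. Below is a minimal sketch of such an endless generator; the names (batch_generator, load logic) and the batch layout are illustrative assumptions, not your actual code:

```python
import numpy as np

def batch_generator(samples, batch_size=32):
    """Yield (X, y) batches forever, as Keras' fit_generator expects.

    Without the outer `while True`, the generator raises StopIteration
    after a single pass over the data. A smaller batch_size also reduces
    the peak memory of the np.asarray call that raised the MemoryError.
    """
    while True:
        np.random.shuffle(samples)  # reshuffle at the start of each epoch
        for start in range(0, len(samples), batch_size):
            batch = samples[start:start + batch_size]
            X = np.asarray([features for features, _ in batch], dtype=np.float32)
            y = np.asarray([label for _, label in batch], dtype=np.float32)
            yield X, y
```

With this shape, `steps_per_epoch` tells Keras when to cut an "epoch" out of the endless stream, so the generator itself must never stop.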


About the NUMA node warning:

https://github.com/tensorflow/tensorflow/blob/e4296aefff97e6edd3d7cee9a09b9dd77da4c034/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc#L855

// Attempts to read the NUMA node corresponding to the GPU device's PCI bus out
// of SysFS. Returns -1 if it cannot...
static int TryToReadNumaNode(const string &pci_bus_id, int device_ordinal) 
{...
  string filename =
      port::Printf("/sys/bus/pci/devices/%s/numa_node", pci_bus_id.c_str());
  FILE *file = fopen(filename.c_str(), "r");
  if (file == nullptr) {
    LOG(ERROR) << "could not open file to read NUMA node: " << filename
               << "\nYour kernel may have been built without NUMA support.";
    return kUnknownNumaNode;
  } ...
  if (port::safe_strto32(content, &value)) {
    if (value < 0) {  // See http://b/18228951 for details on this path.
      LOG(INFO) << "successful NUMA node read from SysFS had negative value ("
                << value << "), but there must be at least one NUMA node"
                            ", so returning NUMA node zero";
      fclose(file);
      return 0;
    }
Run Code Online (Sandbox Code Playgroud)
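The decision in TryToReadNumaNode above can be mirrored in a few lines of Python, e.g. to check on your own machine which value TensorFlow will end up using for a given device. The function names here are my own; the file path and the clamp-negative-to-zero behavior follow the C++ snippet:

```python
def parse_numa_node(content):
    """Mirror TensorFlow's handling of a numa_node value read from sysfs:
    a negative value means the firmware did not report a node, but Linux
    always has at least NUMA node 0, so clamp negatives to 0."""
    value = int(content.strip())
    return 0 if value < 0 else value

def read_numa_node(pci_bus_id):
    """Read /sys/bus/pci/devices/<pci_bus_id>/numa_node. Returns -1
    (the equivalent of kUnknownNumaNode) if the file is missing or
    unreadable, as the C++ code does."""
    path = "/sys/bus/pci/devices/%s/numa_node" % pci_bus_id
    try:
        with open(path) as f:
            return parse_numa_node(f.read())
    except (OSError, ValueError):
        return -1
```

For example, `parse_numa_node("-1\n")` returns 0, which is exactly the substitution the INFO message in your log is reporting.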

TensorFlow was able to open the /sys/bus/pci/devices/%s/numa_node file, where %s is the ID of the GPU PCI card (string pci_bus_id = CUDADriver::GetPCIBusID(device_)). Your machine is not multi-socket; it has only a single CPU socket with an 8-core Xeon E5-2670 installed, so the node should be '0' (a single NUMA node is numbered 0 in Linux), but the message shows there was a -1 value in this file!

So, we know that sysfs is mounted at /sys, that the numa_node special file exists, and that CONFIG_NUMA is enabled in your Linux kernel config (zgrep NUMA /boot/config* /proc/config*). In fact it is enabled: CONFIG_NUMA=y in the deb of your x86_64 4.4.0-78-generic kernel.

The special file numa_node is documented at https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci (is your machine's ACPI firmware buggy?):

What:       /sys/bus/pci/devices/.../numa_node
Date:       Oct 2014
Contact:    Prarit Bhargava <prarit@redhat.com>
Description:
        This file contains the NUMA node to which the PCI device is
        attached, or -1 if the node is unknown.  The initial value
        comes from an ACPI _PXM method or a similar firmware
        source.  If that is missing or incorrect, this file can be
        written to override the node.  In that case, please report
        a firmware bug to the system vendor.  Writing to this file
        taints the kernel with TAINT_FIRMWARE_WORKAROUND, which
        reduces the supportability of your system.

There is a quick (kludge) workaround for this error: find the numa_node file of your GPU and, as the root user, execute this command after every boot, where NNNNN is the PCI ID of your card (search for it in the lspci output and in the /sys/bus/pci/devices/ directory):

echo 0 | sudo tee -a /sys/bus/pci/devices/NNNNN/numa_node

Or just echo the value into every such file; that should be reasonably safe:

for a in /sys/bus/pci/devices/*; do echo 0 | sudo tee -a $a/numa_node; done

Your lshw output also shows that this is not a physical PC but a Xen virtual guest. There are some known issues between the Xen platform (ACPI) emulation and the Linux PCI-bus NUMA support code.

  • Is there a proper way to set a valid NUMA node permanently? The workaround above works well for me, but it is inconvenient to set it by hand every time. (9 upvotes)
  • A simpler way to find the path of the numa_node file, without scanning the whole system (and which can be scripted), is `echo 0 | tee /sys/module/nvidia/drivers/pci:nvidia/*/numa_node`. Note that you also don't need the `for` loop; that is what `tee` is for. (5 upvotes)