How to programmatically determine available GPU memory with TensorFlow?

Bar*_*den 14 python gpu tensorflow

For a vector quantization (k-means) program, I would like to know the amount of memory currently available on the GPU (if there is one). This is needed to choose an optimal batch size, so that the whole data set is processed in as few batches as possible.
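
For float32 vectors, the arithmetic I have in mind is roughly the following sketch, where free_bytes is a placeholder for the number I do not yet know how to obtain:

# hypothetical numbers, just to illustrate the batch-size arithmetic
free_bytes = 5_000_000_000      # free GPU memory; obtaining this is the question
dd = 250000                     # vector dimension
bytes_per_vector = dd * 4       # float32
# leave a margin, since TF ops allocate temporaries on top of the inputs
batch_size = int(0.5 * free_bytes / bytes_per_vector)
print(batch_size)               # -> 2500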

I wrote the following test program:

import tensorflow as tf
import numpy as np
from kmeanstf import KMeansTF  # not actually used in this test

print("GPU Available: ", tf.test.is_gpu_available())

nn = 1000
dd = 250000
# each tensor holds nn*dd float32 values, i.e. 1 GB
print("{:,d} bytes".format(nn * dd * 4))
dic = {}
# allocate 1 GB tensors until the GPU runs out of memory
# (on a 6 GB card the fourth allocation fails)
for x in "ABCD":
    dic[x] = tf.random.normal((nn, dd))
    print(x, dic[x][:1, :2])

print("done...")

This is typical output on my system (Ubuntu 18.04 LTS, GTX-1060 6GB). Note the core dump.

python misc/maxmem.py 
GPU Available:  True
1,000,000,000 bytes
A tf.Tensor([[-0.23787294 -2.0841186 ]], shape=(1, 2), dtype=float32)
B tf.Tensor([[ 0.23762687 -1.1229591 ]], shape=(1, 2), dtype=float32)
C tf.Tensor([[-1.2672468   0.92139906]], shape=(1, 2), dtype=float32)
2020-01-02 17:35:05.988473: W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 953.67MiB (rounded to 1000000000).  Current allocation summary follows.
2020-01-02 17:35:05.988752: W tensorflow/core/common_runtime/bfc_allocator.cc:424] **************************************************************************************************xx
2020-01-02 17:35:05.988835: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 : Resource exhausted: OOM when allocating tensor with shape[1000,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Segmentation fault (core dumped)

Sometimes I get a Python exception instead of a core dump (see below). That would actually be better, because I could catch it and thus determine the maximum available memory by trial and error. But it alternates with the core dump:

python misc/maxmem.py 
GPU Available:  True
1,000,000,000 bytes
A tf.Tensor([[-0.73510283 -0.94611156]], shape=(1, 2), dtype=float32)
B tf.Tensor([[-0.8458411  0.552555 ]], shape=(1, 2), dtype=float32)
C tf.Tensor([[0.30532074 0.266423  ]], shape=(1, 2), dtype=float32)
2020-01-02 17:35:26.401156: W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 953.67MiB (rounded to 1000000000).  Current allocation summary follows.
2020-01-02 17:35:26.401486: W tensorflow/core/common_runtime/bfc_allocator.cc:424] **************************************************************************************************xx
2020-01-02 17:35:26.401571: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 : Resource exhausted: OOM when allocating tensor with shape[1000,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "misc/maxmem.py", line 11, in <module>
    dic[x]=tf.random.normal((nn,dd))
  File "/home/fritzke/miniconda2/envs/tf20b/lib/python3.7/site-packages/tensorflow_core/python/ops/random_ops.py", line 76, in random_normal
    value = math_ops.add(mul, mean_tensor, name=name)
  File "/home/fritzke/miniconda2/envs/tf20b/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 391, in add
    _six.raise_from(_core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1000,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Add] name: random_normal/
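
If the exception were raised reliably, a trial-and-error probe along the following lines could home in on the largest allocation that still fits (just a sketch; with the alternating core dumps it is not something I can depend on):

import tensorflow as tf

def probe_max_floats(start=2**30, min_size=2**20):
    # halve the allocation size until a float32 tensor fits on the GPU
    size = start
    while size >= min_size:
        try:
            with tf.device("/GPU:0"):
                t = tf.zeros((size,), dtype=tf.float32)
            del t
            return size          # number of float32 elements that fit
        except tf.errors.ResourceExhaustedError:
            size //= 2
    return 0

print(probe_max_floats())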

How can I reliably obtain this information on any system the software runs on?

Bar*_*den 18

I actually found the answer in this old question of mine. To give readers some extra benefit, I tested the program mentioned there:

import nvidia_smi  # provided by the nvidia-ml-py3 package

nvidia_smi.nvmlInit()

# card id 0 hardcoded here; there is also a call to get all available
# card ids, so we could iterate
handle = nvidia_smi.nvmlDeviceGetHandleByIndex(0)

info = nvidia_smi.nvmlDeviceGetMemoryInfo(handle)

# all values are reported in bytes
print("Total memory:", info.total)
print("Free memory:", info.free)
print("Used memory:", info.used)

nvidia_smi.nvmlShutdown()

On colab this yields the following result:

Total memory: 17071734784
Free memory: 17071734784
Used memory: 0
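
Regarding the comment in the code about iterating over all cards: assuming the same nvidia-ml-py3 bindings, the enumeration could look like this (nvmlDeviceGetCount returns the number of installed GPUs):

import nvidia_smi

nvidia_smi.nvmlInit()
# query every installed card instead of hardcoding index 0
for i in range(nvidia_smi.nvmlDeviceGetCount()):
    handle = nvidia_smi.nvmlDeviceGetHandleByIndex(i)
    info = nvidia_smi.nvmlDeviceGetMemoryInfo(handle)
    print("GPU", i, "free:", info.free, "of", info.total, "bytes")
nvidia_smi.nvmlShutdown()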

The actual GPU I had there was a Tesla P100, as can be seen by executing

!nvidia-smi

and inspecting the output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   32C    P0    26W / 250W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

  • You have to run "pip install nvidia-ml-py3" to get the nvidia_smi lib (4 upvotes)

y.s*_*hyk 16

This code returns the free memory of each GPU in MiB (as reported by nvidia-smi):

import subprocess as sp

def get_gpu_memory():
    # ask nvidia-smi for the free memory of every GPU in CSV format
    command = "nvidia-smi --query-gpu=memory.free --format=csv"
    output = sp.check_output(command.split()).decode('ascii')
    # drop the CSV header; each remaining line looks like "5123 MiB"
    memory_free_info = output.strip().split('\n')[1:]
    memory_free_values = [int(line.split()[0]) for line in memory_free_info]
    return memory_free_values

print(get_gpu_memory())

This answer relies on nvidia-smi being installed (which is practically always the case for Nvidia GPUs) and is therefore limited to Nvidia GPUs.
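
For completeness: if TensorFlow itself is recent enough (2.5+, if I am not mistaken), tf.config.experimental.get_memory_info reports TensorFlow's own current and peak memory usage on a device. Note that this is the allocator's usage in bytes, not the card's free memory, so it complements rather than replaces the nvidia-smi based approaches above:

import tensorflow as tf

# requires TensorFlow >= 2.5; values are bytes used by TF's allocator,
# not the free memory on the card
info = tf.config.experimental.get_memory_info('GPU:0')
print("current:", info['current'], "peak:", info['peak'])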