Out of memory when running TensorFlow with GPU support in PyCharm

lin*_*ing 4 python pycharm keras tensorflow

My code runs fine in an iPython terminal, but when launched from PyCharm it fails with the out-of-memory error shown below.

/home/abigail/anaconda3/envs/tf_gpuenv/bin/python -Xms1280m -Xmx4g /home/abigail/PycharmProjects/MLNN/src/test.py
Using TensorFlow backend.
Epoch 1/150
2019-01-19 22:12:39.539156: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-01-19 22:12:39.588899: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-01-19 22:12:39.589541: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 750 Ti major: 5 minor: 0 memoryClockRate(GHz): 1.0845
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 59.69MiB
2019-01-19 22:12:39.589552: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
Traceback (most recent call last):
  File "/home/abigail/PycharmProjects/MLNN/src/test.py", line 20, in <module>
    model.fit(X, Y, epochs=150, batch_size=10)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/keras/engine/training.py", line 1039, in fit
    validation_steps=validation_steps)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/keras/engine/training_arrays.py", line 199, in fit_loop
    outs = f(ins_batch)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2697, in __call__
    if hasattr(get_session(), '_make_callable_from_options'):
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 186, in get_session
    _SESSION = tf.Session(config=config)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1551, in __init__
    super(Session, self).__init__(target, graph, config=config)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 676, in __init__
    self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory

Process finished with exit code 1

In PyCharm, I first edited "Help -> Edit Custom VM Options":

-Xms1280m
-Xmx4g

This did not solve the problem. Then I edited "Run -> Edit Configurations -> Interpreter options":

-Xms1280m -Xmx4g

It still gives the same error. My Linux desktop has plenty of RAM (64 GB). How can I solve this problem?

By the way, in PyCharm, if I don't use the GPU, there is no error.
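(For anyone trying to reproduce the CPU-only behaviour: a common way to force a CPU-only run with this keras/tensorflow setup is to hide the CUDA devices before the imports. The snippet below is only an illustration, not the exact contents of test.py.)

import os

# Hide all CUDA devices so TensorFlow falls back to the CPU.
# The variable must be set before tensorflow/keras is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf              # imported only after the environment variable is set
print(tf.test.is_gpu_available())    # prints False when the GPU is hidden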

EDIT:

In [5]: exit                                                                                                                                                                                                                                                                                                                    
(tf_gpuenv) abigail@abigail-XPS-8910:~/nlp/MLMastery/DLwithPython/code/chapter_07$ nvidia-smi
Sun Jan 20 00:41:49 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 415.25       Driver Version: 415.25       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 750 Ti  Off  | 00000000:01:00.0  On |                  N/A |
| 38%   54C    P0     2W /  38W |   1707MiB /  1993MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       770      G   /usr/bin/akonadi_archivemail_agent             2MiB |
|    0       772      G   /usr/bin/akonadi_sendlater_agent               2MiB |
|    0       774      G   /usr/bin/akonadi_mailfilter_agent              2MiB |
|    0      1088      G   /usr/lib/xorg/Xorg                           166MiB |
|    0      1440      G   kwin_x11                                      60MiB |
|    0      1446      G   /usr/bin/krunner                               1MiB |
|    0      1449      G   /usr/bin/plasmashell                          60MiB |
|    0      1665      G   ...quest-channel-token=3687002912233960986   137MiB |
|    0     20728      C   ...ail/anaconda3/envs/tf_gpuenv/bin/python  1255MiB |
+-----------------------------------------------------------------------------+

Iam*_*lie 8

To close out our conversation from the comments: I don't believe you can assign GPU memory or desktop RAM to the GPU, at least not in the way you are attempting. When you have a single GPU, Tensorflow-GPU in most cases allocates roughly 95% of the available GPU memory to the task it is running. In your case, something has already consumed nearly all of the available GPU memory, and that is the main reason your program cannot run. You need to look at the GPU's memory usage and free some of it up (I can't help thinking you already have another Python instance using Tensorflow-GPU, or some other GPU-intensive program, running in the background). On Linux, the nvidia-smi command tells you what is using your GPU; here is an example:

Sun Jan 20 18:23:35 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130                Driver Version: 384.130                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 970     Off  | 00000000:01:00.0 Off |                  N/A |
| 32%   63C    P2    69W / 163W |   3823MiB /  4035MiB |     40%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      3019      C   ...e/scarter/anaconda3/envs/tf1/bin/python  3812MiB |
+-----------------------------------------------------------------------------+

You can see that the card in my server has 4035 MiB of RAM, of which 3823 MiB is in use. Also look at the GPU processes at the bottom: process PID 3019 is consuming 3812 MiB of the 4035 MiB available on the card. If we wanted to run another Python script using tensorflow, I would have two main options: install a second GPU and run on that, or, if no GPU is available, run on the CPU. Someone more expert than I am might say you can simply give each task half of the memory (see the sketch below), but 2 GB is already very low for tensorflow training; a card with more memory (6 GB+) is usually recommended for that task.
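If you do end up sharing the card between two processes, the usual approach with the keras / tensorflow 1.x stack shown in your traceback is to control GPU memory through the session config before the model is built. The snippet below is only a sketch of that idea; the 0.5 fraction is an arbitrary example value.

import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
# Grow GPU memory on demand instead of grabbing ~95% up front.
config.gpu_options.allow_growth = True
# Additionally cap this process at roughly half of the card's memory.
config.gpu_options.per_process_gpu_memory_fraction = 0.5
K.set_session(tf.Session(config=config))

# Build and fit the Keras model only after the session has been set.

On a 2 GiB card that already reports only ~60 MiB free, this will not help by itself; the memory held by the other process has to be released first.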
Finally, find out what is consuming all of your graphics card's memory and end that task. I believe that will solve your problem.