don*_*njy · 7 · python, tensorflow, tensorflow2.0
In TensorFlow 1.x there were options, such as use_unified_memory and per_process_gpu_memory_fraction, that could trigger the use of CUDA UVM. But how can this be done in TensorFlow 2.0?
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/config.proto
// If true, uses CUDA unified memory for memory allocations. If
// per_process_gpu_memory_fraction option is greater than 1.0, then unified
// memory is used regardless of the value for this field. See comments for
// per_process_gpu_memory_fraction field for more details and requirements
// of the unified memory. This option is useful to oversubscribe memory if
// multiple processes are sharing a single GPU while individually using less
// than 1.0 per process memory fraction.
bool use_unified_memory = 2;
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
# A fraction greater than 1.0 requests oversubscription, which switches the
# allocator to CUDA unified memory (see the config.proto comment above).
config.gpu_options.per_process_gpu_memory_fraction = 2
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
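One way to check whether oversubscription is actually in effect (a minimal sketch, assuming a GPU with less free memory than the tensor below; the sizes are illustrative and not from the original post) is to allocate something larger than physical GPU memory inside a session and see whether it runs without a resource-exhausted error:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # use graph/session-style execution
config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 2  # > 1.0 requests unified memory
with tf.compat.v1.Session(config=config) as sess:
    # ~12 GB of float32 zeros (60000 * 50000 * 4 bytes); on an 8 GB card this
    # only fits if oversubscription via unified memory is in effect.
    big = tf.zeros([60000, 50000], dtype=tf.float32)
    print(sess.run(tf.reduce_sum(big)))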
小智 · 3
If anyone wants to enable UVM in 1.x, just set per_process_gpu_memory_fraction to any number greater than 1.
use_unified_memory does not do anything.
Another potential TensorFlow gotcha: you may want to move the model definition to after the session has been created, like this:
with tf.Session(config=tf.ConfigProto(gpu_options=tf.GPUOptions(...))) as s:
    model = xxx  # the model is defined only after the session exists
    s.run(model)
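A slightly more concrete version of this pattern for TensorFlow 1.x (a sketch; the matmul "model" and the matrix sizes are illustrative placeholders, not part of the original answer) combines the greater-than-one memory fraction with creating the session before defining the model:

import tensorflow as tf  # TensorFlow 1.x

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=2)
config = tf.ConfigProto(gpu_options=gpu_options)

# Create the session first, then define the model inside its scope.
with tf.Session(config=config) as s:
    a = tf.random_normal([20000, 20000])  # placeholder for a real model
    b = tf.random_normal([20000, 20000])
    model = tf.matmul(a, b)
    s.run(model)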