Dmi*_*sil 7 python multi-gpu python-3.x tensorflow
I created 3 virtual GPUs (from 1 physical GPU) and tried to speed up the vectorization of images. However, using the code below with manual device placement from the docs (here), I get a strange result: running on all GPUs is twice as slow as on a single GPU. I also checked this code on a machine with 3 physical GPUs (removing the virtual-device initialization): it behaves the same way.
Environment: Python 3.6, Ubuntu 18.04.3, tensorflow-gpu 1.14.0.
Code (this example creates 3 virtual devices, so you can test it on a PC with a single GPU):
import os
import time

import numpy as np
import tensorflow as tf
from PIL import Image  # needed for Image.open below

start = time.time()

# Create 3 virtual GPUs with 1GB memory each. This must run before the
# GPUs are initialized, i.e., before the Session is created below (which
# also matches the order of the output shown further down).
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
             tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
             tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)


def load_graph(frozen_graph_filename):
    # Load the protobuf file from disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # Then import the graph_def into a new Graph and return it
    with tf.Graph().as_default() as graph:
        # The name var will prefix every op/node in the graph.
        # Since we load everything into a new graph, this is not needed.
        tf.import_graph_def(graph_def, name="")
    return graph


path_to_graph = '/imagenet/'  # Path to imagenet folder where the graph file is placed
GRAPH = load_graph(os.path.join(path_to_graph, 'classify_image_graph_def.pb'))

# Create Session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
config.gpu_options.allow_growth = True
session = tf.Session(graph=GRAPH, config=config)

output_dir = '/vectors/'  # where to save vectors from images
image_list = ['1.jpg', '2.jpg', '3.jpg']  # list of images to vectorize (tested on 100 and 1000 examples)
selected_list = image_list  # defined here so the single-GPU pass below can run

# Single GPU vectorization
for image_index, image in enumerate(selected_list):
    with Image.open(image) as f:
        image_data = f.convert('RGB')
        feature_tensor = session.graph.get_tensor_by_name('pool_3:0')
        feature_vector = session.run(feature_tensor, {'DecodeJpeg:0': image_data})
        feature_vector = np.squeeze(feature_vector)
        outfile_name = os.path.basename(image) + ".vc"
        out_path = os.path.join(output_dir, outfile_name)
        # Save vector
        np.savetxt(out_path, feature_vector, delimiter=',')

print(f"Single GPU: {time.time() - start}")
start = time.time()
print("Start calculation on multiple GPU")

print("Create prepared ops")
start1 = time.time()
gpus = logical_gpus  # the virtual devices created above; comment this line to use physical GPU devices

# Assign a chunk of the list to each GPU
# image_list1, image_list2, image_list3 = image_list[:len(image_list) // 3], \
#     image_list[len(image_list) // 3:2 * len(image_list) // 3], \
#     image_list[2 * len(image_list) // 3:]

selected_list = image_list  # comment this line out to assign a chunk of the list to each GPU manually
output_vectors = []
if gpus:
    # Replicate your computation on multiple GPUs
    feature_vectors = []
    for gpu in gpus:  # iterating over the virtual GPU devices, not physical ones
        with tf.device(gpu.name):
            print(f"Assign list of images to {gpu.name.split(':', 4)[-1]}")
            # Try to assign a chunk of the image list to each GPU -- takes
            # the same time as a single GPU
            # if gpu.name.split(':', 4)[-1] == "GPU:0":
            #     selected_list = image_list1
            # if gpu.name.split(':', 4)[-1] == "GPU:1":
            #     selected_list = image_list2
            # if gpu.name.split(':', 4)[-1] == "GPU:2":
            #     selected_list = image_list3
            for image_index, image in enumerate(selected_list):
                with Image.open(image) as f:
                    image_data = f.convert('RGB')
                    feature_tensor = session.graph.get_tensor_by_name('pool_3:0')
                    feature_vector = session.run(feature_tensor, {'DecodeJpeg:0': image_data})
                    feature_vectors.append(feature_vector)

print("All images has been assigned to GPU's")
print(f"Time spend on prep ops: {time.time() - start1}")

print("Start calculation on multiple GPU")
start1 = time.time()
for image_index, image in enumerate(image_list):
    feature_vector = np.squeeze(feature_vectors[image_index])
    outfile_name = os.path.basename(image) + ".vc"
    out_path = os.path.join(output_dir, outfile_name)
    # Save vector
    np.savetxt(out_path, feature_vector, delimiter=',')

# Close session
session.close()
print(f"Calc on GPU's spend: {time.time() - start1}")
print(f"All time, spend on multiple GPU: {time.time() - start}")
Output (from a run over a list with 100 images):
1 Physical GPU, 3 Logical GPUs
Single GPU: 18.76301646232605
Start calculation on multiple GPU
Create prepared ops
Assign list of images to GPU:0
Assign list of images to GPU:1
Assign list of images to GPU:2
All images has been assigned to GPU's
Time spend on prep ops: 18.263537883758545
Start calculation on multiple GPU
Calc on GPU's spend: 11.697082042694092
All time, spend on multiple GPU: 29.960679531097412
What I have tried: splitting the list of images into 3 chunks and assigning a chunk to each GPU (see the commented-out lines in the code). This reduces the multi-GPU time to 17 seconds, which is only slightly (~5%) faster than the 18-second single-GPU run.
Expected result: the multi-GPU version is faster than the single-GPU version (at least a 1.5x speedup).
My idea of why this happens: I have written the computation in the wrong way.
There are two underlying misunderstandings that are causing your trouble:
1. with tf.device(...): applies to the graph nodes created inside the scope, not to Session.run calls.
2. Session.run is a blocking call; successive calls do not run in parallel. TensorFlow can only parallelize the contents of a single Session.run (illustrated below).
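To illustrate the second point, here is a minimal, hypothetical sketch (TF 1.x, as in your environment; it assumes three GPU devices are visible, physical or virtual) of the difference between one Session.run per device and a single Session.run over all per-device ops:

import tensorflow as tf

# Build one op per GPU. tf.device applies here, at graph-construction time.
g = tf.Graph()
with g.as_default():
    ops = []
    for i in range(3):
        with tf.device(f'/GPU:{i}'):
            a = tf.random.uniform((2048, 2048))
            ops.append(tf.matmul(a, a))

with tf.Session(graph=g) as sess:
    # Serial: each blocking call waits for its op to finish.
    for op in ops:
        sess.run(op)
    # Parallel: one blocking call; TensorFlow is free to execute the
    # three matmuls concurrently because they are independent.
    sess.run(ops)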
Modern TF (>= 2.0) can make this much easier.
Mainly, you can stop using tf.Session and tf.Graph. Use @tf.function instead; I believe this basic structure will work:
@tf.function
def my_function(inputs, gpus, model):
    results = []
    for input, gpu in zip(inputs, gpus):
        with tf.device(gpu):
            results.append(model(input))
    return results
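A hypothetical way to drive that sketch, reusing the 3-virtual-GPU setup from your code (the MobileNetV2 model and the random batches are stand-ins, not part of your pipeline):

# Assumes TF 2.x and that the virtual GPUs have already been created.
gpus = [d.name for d in tf.config.experimental.list_logical_devices('GPU')]
model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in model

# One batch of images per GPU; shapes match MobileNetV2's default input.
inputs = [tf.random.uniform((8, 224, 224, 3)) for _ in gpus]

outputs = my_function(inputs, gpus, model)  # one forward pass per device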
But you will want to try a more realistic test: with just 3 images you are not measuring the real performance at all.
Also note:
1. The tf.distribute.Strategy classes can help simplify some of this by separating the device specification from the @tf.function that is being run: strategy.experimental_run_v2(my_function, args=(dataset_inputs,)).
2. A tf.data.Dataset input pipeline will help you overlap loading/preprocessing with model execution; a rough sketch combining the two follows.
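This is how those two pieces could fit together (TF 2.0/2.1 API names; image_paths, preprocess, and model are hypothetical stand-ins):

strategy = tf.distribute.MirroredStrategy()  # one replica per (logical) GPU
# Note: build `model` under strategy.scope() so its variables are mirrored.

AUTOTUNE = tf.data.experimental.AUTOTUNE
dataset = (tf.data.Dataset.from_tensor_slices(image_paths)
           .map(preprocess, num_parallel_calls=AUTOTUNE)  # overlapped with execution
           .batch(32)
           .prefetch(AUTOTUNE))
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def step(batch):
    return model(batch)

for batch in dist_dataset:
    # Each replica runs `step` on its shard of the batch.
    per_replica_vectors = strategy.experimental_run_v2(step, args=(batch,))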
But if you really intend to do this with tf.Graph and tf.Session, I think you basically need to reorganize your code from this:

# Your code: builds a graph once...
graph = build_graph()
for gpu in gpus:
    with tf.device(gpu):
        # ...then calls Session.run once per device scope. The device
        # scope has no effect here, and the calls run one after another.
        session.run(...)
To this:
g = tf.Graph()
with g.as_default():
    results = []
    for gpu in gpus:
        # Build a copy of the graph on each device; each copy reads its
        # own batch from the input pipeline.
        input = iterator.get_next()
        with tf.device(gpu):
            results.append(my_function(input))

# A single Session.run call, so the per-device subgraphs can execute
# in parallel.
np_result = session.run(results)
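Applied to your frozen Inception graph, that reorganization could look roughly like the following. This is an untested sketch: it assumes tf.import_graph_def picks up the surrounding device scope, and `images` stands for one decoded RGB image per logical GPU:

import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('/imagenet/classify_image_graph_def.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    feature_tensors = []
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    for i, gpu in enumerate(logical_gpus):
        # One copy of the network per device, distinguished by name prefix.
        with tf.device(gpu.name):
            tf.import_graph_def(graph_def, name=f'replica_{i}')
        feature_tensors.append(graph.get_tensor_by_name(f'replica_{i}/pool_3:0'))

with tf.Session(graph=graph) as session:
    # One Session.run computes a feature vector on every GPU in parallel.
    feed = {f'replica_{i}/DecodeJpeg:0': img for i, img in enumerate(images)}
    vectors = session.run(feature_tensors, feed_dict=feed)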