My CNN is fairly small:
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(input_shape=(400, 400, 3), filters=6, kernel_size=5, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(filters=12, kernel_size=3, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(filters=24, kernel_size=3, strides=2, padding='valid', activation='relu'),
    tf.keras.layers.Conv2D(filters=32, kernel_size=3, strides=2, padding='valid', activation='relu'),
    tf.keras.layers.Conv2D(filters=48, kernel_size=3, strides=2, padding='valid', activation='relu'),
    tf.keras.layers.Conv2D(filters=64, kernel_size=3, strides=2, padding='valid', activation='relu'),
    tf.keras.layers.Conv2D(filters=96, kernel_size=3, strides=2, padding='valid', activation='relu'),
    tf.keras.layers.Conv2D(filters=128, kernel_size=3, strides=2, padding='valid', activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(240, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
I use the following code to measure the model's inference performance:
import time
import numpy as np

for img_per_batch in [1, 5, 10, 50]:
    # warm up the model
    image = np.random.random(size=(img_per_batch, 400, 400, 3)).astype('float32')
    model(image, training=False)

    n_iter = 100
    start_time = time.time()
    for _ in range(n_iter):
        image = np.random.random(size=(img_per_batch, 400, 400, 3)).astype('float32')
        model(image, training=False)
    dt = (time.time() - start_time) * 1000
    print(f'img_per_batch = {img_per_batch}, {dt/n_iter:.2f} ms per iteration, {dt/n_iter/img_per_batch:.2f} ms per image')
My output (Nvidia Jetson Xavier, tensorflow==2.0.0):
img_per_batch = 1, 21.74 ms per iteration, 21.74 ms per image
img_per_batch = 5, 42.35 ms per iteration, 8.47 ms per image
img_per_batch = 10, 68.37 ms per iteration, 6.84 ms per image
img_per_batch = 50, 312.83 ms per iteration, 6.26 ms per image
Then I added a dropout layer after each fully connected layer:
model = tf.keras.models.Sequential([
    # ... convolution layers are the same
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(.3),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(.3),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(.3),
    tf.keras.layers.Dense(240, activation='softmax')
])
After adding these layers, the output looks like this:
img_per_batch = 1, 31.18 ms per iteration, 31.18 ms per image
img_per_batch = 5, 76.15 ms per iteration, 15.23 ms per image
img_per_batch = 10, 127.91 ms per iteration, 12.79 ms per image
img_per_batch = 50, 513.85 ms per iteration, 10.28 ms per image
In theory, dropout layers should not affect inference performance at all, since they are only active during training. But in the code above, adding them made single-image prediction about 1.5× slower, and a batch of 10 images almost twice as slow as the model without dropout. Am I doing something wrong?
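As a sanity check that dropout is indeed a no-op at inference time, a Dropout layer called with training=False should simply pass its input through (a minimal sketch; the layer and shapes here are arbitrary):

import numpy as np
import tensorflow as tf

x = np.random.random((4, 256)).astype('float32')
dropout = tf.keras.layers.Dropout(.3)
# With training=False the layer should return its input unchanged
print(np.allclose(x, dropout(x, training=False).numpy()))  # expected: True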
Apparently this is a known issue in TensorFlow 2.0.0; see this GitHub comment.
Try using model.predict(x) instead of model(x).
It can also be resolved by upgrading to a newer version of TensorFlow, such as 2.1.0.
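A minimal sketch of the suggested change, reusing the benchmark loop from the question but calling model.predict() instead of invoking the model directly (timings will of course depend on your hardware and TF version):

import time
import numpy as np

for img_per_batch in [1, 5, 10, 50]:
    # warm up the model
    image = np.random.random(size=(img_per_batch, 400, 400, 3)).astype('float32')
    model.predict(image)

    n_iter = 100
    start_time = time.time()
    for _ in range(n_iter):
        image = np.random.random(size=(img_per_batch, 400, 400, 3)).astype('float32')
        model.predict(image)  # predict() always runs in inference mode
    dt = (time.time() - start_time) * 1000
    print(f'img_per_batch = {img_per_batch}, {dt/n_iter:.2f} ms per iteration')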
Hope this helps.