The main idea is to convert TFRecords into numpy arrays. Suppose the TFRecord stores images. Specifically:
1.jpg 2
2.jpg 4
3.jpg 5
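For context, the records were presumably written with one tf.train.Example per image/label pair. Below is a minimal sketch of such a writer; the filename 'images.tfrecords', the PIL-based loading, and the helper names are assumptions, and only the feature keys are taken from the parsing code further down.

import numpy as np
import tensorflow as tf
from PIL import Image

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

# Hypothetical writer for the (image, label) pairs listed above.
with tf.python_io.TFRecordWriter('images.tfrecords') as writer:
    for path, label in [('1.jpg', 2), ('2.jpg', 4), ('3.jpg', 5)]:
        img = np.array(Image.open(path))
        example = tf.train.Example(features=tf.train.Features(feature={
            'image_raw': _bytes_feature(img.tobytes()),
            'label': _int64_feature(label),
            'height': _int64_feature(img.shape[0]),
            'width': _int64_feature(img.shape[1]),
            'depth': _int64_feature(img.shape[2]),
        }))
        writer.write(example.SerializeToString())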
I am currently using the following code:
import tensorflow as tf
import os
def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        # Defaults are not specified since both keys are required.
        features={
            'image_raw': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
            'height': tf.FixedLenFeature([], tf.int64),
            'width': tf.FixedLenFeature([], tf.int64),
            'depth': tf.FixedLenFeature([], tf.int64)
        })
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    label = tf.cast(features['label'], tf.int32)
    height = tf.cast(features['height'], tf.int32)
    width = tf.cast(features['width'], tf.int32)
    depth = tf.cast(features['depth'], tf.int32)
    return image, label, …
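The missing piece is turning these tensors into actual numpy arrays. A minimal sketch of one way to do that with the TF1 queue-runner API follows; the filename 'images.tfrecords' and the session loop are assumptions, not part of the original code, and it is sess.run that yields the numpy values.

# Assuming read_and_decode returns (image, label, ...) as above.
filename_queue = tf.train.string_input_producer(['images.tfrecords'], num_epochs=1)
image, label = read_and_decode(filename_queue)[:2]

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            # sess.run evaluates the tensors and returns plain numpy values.
            img_np, label_np = sess.run([image, label])
            print(img_np.shape, label_np)
    except tf.errors.OutOfRangeError:
        pass  # end of the record file
    finally:
        coord.request_stop()
        coord.join(threads)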
I am implementing a reinforcement learning model with Keras and Tensorflow, and I have to call model.predict() frequently on single inputs.

While testing inference on a simple pretrained model, I noticed that Keras's model.predict is slower than just using Numpy on the stored weights. Why is it that slow, and how can I speed it up? Using pure Numpy is not feasible for complex models.
import timeit
import numpy as np
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense
w = np.array([[-1., 1., 0., 0.], [0., 0., -1., 1.]]).T
b = np.array([ 15., -15., -21., 21.])
model = Sequential()
model.add(Dense(4, input_dim=2, activation='linear'))
model.layers[0].set_weights([w.T, b])
model.compile(loss='mse', optimizer='adam')
state = np.array([-23.5, 17.8])
def predict_very_slow():
    return model.predict(state[np.newaxis])[0]

def predict_slow():
    ws = model.layers[0].get_weights()
    return np.matmul(ws[0].T, state) + ws[1]

def predict_fast():
    return np.matmul(w, state) + b

print(
    timeit.timeit(predict_very_slow, number=10000),
    timeit.timeit(predict_slow, number=10000),
    timeit.timeit(predict_fast, number=10000)
)
# 5.168972805004538 1.6963867129435828 0.021918574168087623 …
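A common way to cut the per-call overhead of model.predict on single samples is to build the prediction function once with the Keras backend and reuse it. This is only a sketch, assuming the TF1-era Keras backend; predict_faster is a name introduced here, not from the original post.

from tensorflow.python.keras import backend as K

# Built once; model.predict re-runs batching/validation logic on every call,
# which dominates the cost for a single tiny input.
predict_fn = K.function([model.input], [model.output])

def predict_faster():
    return predict_fn([state[np.newaxis]])[0][0]

print(timeit.timeit(predict_faster, number=10000))

This does not reach the pure-Numpy time, but it avoids most of the Python-side work that predict repeats on each call.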
I am trying to move our input pipeline to the tensorflow dataset api. To do that, we have converted the images and labels to tfrecords. We then read the tfrecords back through the dataset api and check whether the original data and the data read back are the same. So far so good. Below is the code that reads the tfrecords into a dataset:

def _parse_function2(proto):
    # define your tfrecord again. Remember that you saved your image as a string.
    keys_to_features = {"im_path": tf.FixedLenSequenceFeature([], tf.string, allow_missing=True),
                        "im_shape": tf.FixedLenSequenceFeature([], tf.int64, allow_missing=True),
                        "score_shape": tf.FixedLenSequenceFeature([], tf.int64, allow_missing=True),
                        "geo_shape": tf.FixedLenSequenceFeature([], tf.int64, allow_missing=True),
                        "im_patches": tf.FixedLenSequenceFeature([], tf.string, allow_missing=True),
                        "score_patches": tf.FixedLenSequenceFeature([], tf.string, allow_missing=True),
                        "geo_patches": tf.FixedLenSequenceFeature([], tf.string, allow_missing=True),
                        }

    # Load one example
    parsed_features = tf.parse_single_example(serialized=proto, features=keys_to_features)
    parsed_features['im_patches'] = parsed_features['im_patches'][0]
    parsed_features['score_patches'] = parsed_features['score_patches'][0]
    parsed_features['geo_patches'] = parsed_features['geo_patches'][0]

    parsed_features['im_patches'] = tf.decode_raw(parsed_features['im_patches'], tf.uint8)
    parsed_features['im_patches'] = tf.reshape(parsed_features['im_patches'], parsed_features['im_shape']) …
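A minimal sketch of how this parse function would typically be plugged into a tf.data pipeline; the filename 'patches.tfrecords' is a placeholder, and it is assumed the truncated end of _parse_function2 returns parsed_features.

dataset = tf.data.TFRecordDataset(['patches.tfrecords'])
dataset = dataset.map(_parse_function2)

iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    # Each run fetches one parsed example as a dict of numpy arrays,
    # which can then be compared against the original input data.
    features = sess.run(next_element)
    print(features['im_patches'].shape)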