mat*_*att 5 python tensorflow tensor transfer-learning google-cloud-automl
I am using a classification model exported from Google AutoML Vision, so I only have a saved_model.pb and no variables, checkpoints, etc. I want to load this model graph into a local TensorFlow installation, use it for inference and continue training it with more pictures.
Main questions:
Is this plan feasible at all, i.e. to load the single saved_model.pb without variables, checkpoints, etc. and train the resulting graph with new data?
If yes: how do you get an input of shape (?,) with the image encoded as a string?
Ideally, looking ahead: is there anything important to consider for the training part?
Background information on the code:
To read the image I use the same approach as when running inference against the Docker container, i.e. a base64-encoded image.
To load the graph, I checked via the CLI (saved_model_cli show --dir input/model) which tag-set the graph needs, namely serve.
To get the names of the input tensors I use graph.get_operations(), which gives me Placeholder:0 for image_bytes and Placeholder_1:0 for the key (an arbitrary string to identify the image). Both have dimension dim -1, i.e. (?,).
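As a side note, the same information can also be read from the SavedModel's SignatureDef instead of going through graph.get_operations(). A minimal sketch of that (assuming the signature key is the usual serving_default; saved_model_cli lists the actual keys):

import tensorflow as tf

path_mdl = "input/model"  # same export directory as above

with tf.Session(graph=tf.Graph()) as sess:
    # loader.load returns the MetaGraphDef, which carries the signatures
    meta_graph_def = tf.saved_model.loader.load(sess, ["serve"], path_mdl)
    sig = meta_graph_def.signature_def["serving_default"]
    # logical input/output names mapped to tensor names, dtypes and shapes
    for name, info in sig.inputs.items():
        print("input :", name, info.name, info.dtype, info.tensor_shape)
    for name, info in sig.outputs.items():
        print("output:", name, info.name, info.dtype, info.tensor_shape)

My actual attempt then looks like this: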
import tensorflow as tf
import numpy as np
import base64
import io

path_img = "input/testimage.jpg"
path_mdl = "input/model"

# input to the network is expected to be a base64-encoded image
with io.open(path_img, 'rb') as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode('utf-8')

# reshaping to (1,) as the expected dimension is (?,)
feed_dict_option1 = {
    "Placeholder:0": np.array(str(encoded_image)).reshape(1,),
    "Placeholder_1:0": "image_key"
}

# reshaping to (1,1) as the expected dimension is (?,)
feed_dict_option2 = {
    "Placeholder:0": np.array(str(encoded_image)).reshape(1, 1),
    "Placeholder_1:0": "image_key"
}

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], path_mdl)
    graph = tf.get_default_graph()

    sess.run('scores:0', feed_dict=feed_dict_option1)
    sess.run('scores:0', feed_dict=feed_dict_option2)
Output:
# for input reshaped to (1,)
ValueError: Cannot feed value of shape (1,) for Tensor 'Placeholder:0', which has shape '(?,)'
# for input reshaped to (1,1)
ValueError: Cannot feed value of shape (1, 1) for Tensor 'Placeholder:0', which has shape '(?,)'
How do you get the input into shape (?,)?
Many thanks.
Yes! It is possible. I have an object detection model that should be similar, and I can run it in TensorFlow 1.14.0 as follows:
import cv2
import tensorflow as tf

# read a test image and re-encode it as JPEG bytes
img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]

with tf.Session(graph=tf.Graph()) as sess:
    # the SavedModel has to be loaded into the session first
    # (tag set as reported by saved_model_cli, here 'serve')
    tf.saved_model.loader.load(sess, ['serve'], saved_model_dir)
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'encoded_image_string_tensor:0': inp})
I used netron to find my inputs.
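Applied to your graph, the feed would presumably look like the sketch below. It is untested and simply combines the list-of-bytes feed from above with the placeholder names and the base64 encoding from your question; a (?,) string tensor is normally fed with a flat Python list, one element per image:

import base64
import io
import tensorflow as tf

path_img = "input/testimage.jpg"
path_mdl = "input/model"

with io.open(path_img, 'rb') as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode('utf-8')

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], path_mdl)
    # one list entry per image / per key
    out = sess.run('scores:0',
                   feed_dict={'Placeholder:0': [encoded_image],
                              'Placeholder_1:0': ['image_key']})

Whether the placeholder expects the base64 string or the raw JPEG bytes I cannot say from here; the base64 variant follows your Docker setup.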
In TensorFlow 2.0 it is much easier:
import cv2
import tensorflow as tf

# read a test image and re-encode it as JPEG bytes
img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]

saved_model_dir = '.'
loaded = tf.saved_model.load(export_dir=saved_model_dir)
infer = loaded.signatures["serving_default"]
out = infer(key=tf.constant('something_unique'), image_bytes=tf.constant(inp))
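If you do not want to use netron, the argument names of the TF 2.0 signature (here key and image_bytes) can also be printed from the loaded model; a small sketch, assuming the same saved_model_dir as above:

import tensorflow as tf

loaded = tf.saved_model.load(export_dir='.')
infer = loaded.signatures["serving_default"]

# keyword arguments the signature expects, with dtypes and shapes
print(infer.structured_input_signature)
# dictionary of output tensors the signature returns
print(infer.structured_outputs)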
Also, a saved_model.pb is not a frozen_inference_graph.pb, see: What is the difference between frozen_inference_graph.pb and saved_model.pb?
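In case it helps, this is roughly how the two formats are loaded differently in TF 1.x (a sketch; the file and directory names are just examples):

import tensorflow as tf

# a frozen graph is a plain GraphDef with the variables baked in as constants
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# a SavedModel directory (containing saved_model.pb) is loaded with the loader,
# which also restores signatures and, if present, the variables
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'input/model')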