Mat*_*rtz 3 python tensorflow google-cloud-automl
I've been using AutoML Vision Edge for some image classification tasks, with great results when exporting the model in TFLite format. However, I just tried exporting the saved_model.pb file and running it with TensorFlow 2.0, and I seem to be running into some problems.
Code snippet:
import numpy as np
import tensorflow as tf
import cv2
from tensorflow import keras
my_model = tf.keras.models.load_model('saved_model')
print(my_model)
print(my_model.summary())
'saved_model' is the directory containing the saved_model.pb file I downloaded. Here is what I see:
2019-10-18 23:29:08.801647: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-18 23:29:08.829010: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ffc2d717510 executing computations on platform Host. Devices:
2019-10-18 23:29:08.829038: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
  File "classify_in_out_tf2.py", line 81, in <module>
    print(my_model.summary())
AttributeError: 'AutoTrackable' object has no attribute 'summary'
I'm not sure whether this is related to how I exported the model, to the code I'm using to load it, whether these models are simply incompatible with TensorFlow 2.0, or some combination of these.
Any help would be greatly appreciated!
sho*_*er3 10
I got saved_model.pb working outside of the docker container (for object detection, not classification, but they should be similar; change the outputs and maybe the inputs accordingly, as sketched further below). Here is how with TensorFlow 1.14.0, first passing the image as encoded JPEG bytes and then as a numpy array:
import cv2
import tensorflow as tf

# Read the image and encode it as JPEG bytes
img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'directory_of_saved_model')
    graph = tf.get_default_graph()
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'encoded_image_string_tensor:0': inp})
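For reference, sess.run returns the fetched tensors in the same order as the fetch list, so the detection outputs of the block above can be unpacked like this (a minimal usage sketch):

# out is a list ordered like the fetches above
num_detections, scores, boxes, classes = out
print(int(num_detections[0]))  # number of detected boxes
print(scores[0][:5])           # first few confidence scores for the image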
import cv2
import tensorflow as tf
import numpy as np

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'directory_of_saved_model')
    graph = tf.get_default_graph()
    # Read and preprocess an image.
    img = cv2.imread(filepath)
    # Run the model
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'map/TensorArrayStack/TensorArrayGatherV3:0': img[np.newaxis, :, :, :]})
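For the classification case mentioned above, only the fetched output tensors (and possibly the input) should change. This is a rough sketch only: the names 'scores:0' and 'labels:0' below are assumptions for illustration, not names confirmed from the AutoML export, so check the actual tensor names in netron or with saved_model_cli first.

import cv2
import tensorflow as tf

# Encode the image as JPEG bytes, as in the first example
img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'directory_of_saved_model')
    # NOTE: 'scores:0' and 'labels:0' are hypothetical output names, and the
    # input name may also differ for classification; replace them with the
    # names your exported model actually exposes.
    out = sess.run([sess.graph.get_tensor_by_name('scores:0'),
                    sess.graph.get_tensor_by_name('labels:0')],
                   feed_dict={'encoded_image_string_tensor:0': inp})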
I used netron to find my input. With TensorFlow 2.0:
import cv2
import tensorflow as tf

# Read the image and encode it as JPEG bytes
img = cv2.imread('path_to_image_file')
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]

# Load the SavedModel and call its serving signature directly
loaded = tf.saved_model.load(export_dir='directory_of_saved_model')
infer = loaded.signatures["serving_default"]
out = infer(key=tf.constant('something_unique'), image_bytes=tf.constant(inp))
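As an alternative to netron in TensorFlow 2.0, the serving signature's expected inputs and outputs can also be inspected directly from Python (or with saved_model_cli show --dir directory_of_saved_model --all). A minimal sketch, reusing the same directory as above:

import tensorflow as tf

# Load the SavedModel and grab its default serving signature
loaded = tf.saved_model.load(export_dir='directory_of_saved_model')
infer = loaded.signatures["serving_default"]

# Print the expected input specs and the structure of the outputs
print(infer.structured_input_signature)
print(infer.structured_outputs)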