Sol*_*lix 40 python keras tensorflow
I fine-tuned an Inception model with a new dataset and saved it as an ".h5" model in Keras. Now my goal is to run the model on Android with TensorFlow, which only accepts the ".pb" extension. Is there any library in Keras or TensorFlow to do this conversion? So far I have looked at this post: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html, but it is still not clear to me.
jde*_*esa 65
Keras itself does not include any means of exporting a TensorFlow graph as a protocol buffer file, but you can do it using the regular TensorFlow utilities. There is a blog post explaining how to do it with the freeze_graph.py utility script included in TensorFlow, which is the "typical" way of doing it.
However, I personally find it a nuisance to have to make a checkpoint and then run an external script to obtain the model, and prefer to do it from my own Python code instead, so I use a function like this:
import tensorflow as tf

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph
This is inspired by the implementation of freeze_graph.py, and the parameters are similar to the script's. session is the TensorFlow session object. keep_var_names is only needed if you want to keep some variables not frozen (e.g. for stateful models), so generally not. output_names is a list with the names of the operations that produce the outputs you want. clear_devices just removes any device directives to make the graph more portable. So, for a typical Keras model with one output, you would do something like:
from keras import backend as K

# Create, compile and train model...

frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])
Then you can write the graph to a file as usual with tf.train.write_graph:
tf.train.write_graph(frozen_graph, "some_directory", "my_model.pb", as_text=False)
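As a quick sanity check (my addition, not part of the answer above), you can load the resulting .pb back into a fresh graph and run one prediction on it. This is a minimal sketch assuming TF 1.x, the model and the file path from the snippets above; the tensor names are just the Keras op names with the usual ":0" suffix, and the zero-filled batch is only a placeholder input:

import numpy as np
import tensorflow as tf

# Read the serialized GraphDef back from disk
with tf.gfile.GFile("some_directory/my_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    # Reuse the op names from the Keras model defined above (assumes it is still in scope)
    input_tensor = graph.get_tensor_by_name(model.inputs[0].op.name + ":0")
    output_tensor = graph.get_tensor_by_name(model.outputs[0].op.name + ":0")
    with tf.Session(graph=graph) as sess:
        dummy = np.zeros((1,) + tuple(model.input_shape[1:]), dtype=np.float32)  # placeholder batch of one
        print(sess.run(output_tensor, feed_dict={input_tensor: dummy}))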
Jef*_*ang 24
The freeze_session method works fine. But saving to a checkpoint file and then using the freeze_graph tool that comes with TensorFlow seems simpler to me, as it's easier to maintain. All you need to do are the following two steps:
First, add the following to your Keras code after model.fit(...) and train your model:
from keras import backend as K
import tensorflow as tf
print(model.output.op.name)
saver = tf.train.Saver()
saver.save(K.get_session(), '/tmp/keras_model.ckpt')
Then cd to your TensorFlow root directory and run:
python tensorflow/python/tools/freeze_graph.py \
--input_meta_graph=/tmp/keras_model.ckpt.meta \
--input_checkpoint=/tmp/keras_model.ckpt \
--output_graph=/tmp/keras_frozen.pb \
--output_node_names="<output_node_name_printed_in_step_1>" \
--input_binary=true
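If you forgot to note the output node name in step 1, one way to recover it (my addition, not from the answer) is to import the saved meta graph and list its node names; the Keras output op (e.g. something ending in "/Sigmoid" or "/Softmax") is normally near the end of the list:

import tensorflow as tf

# Import the graph structure saved next to the checkpoint
tf.train.import_meta_graph('/tmp/keras_model.ckpt.meta', clear_devices=True)
graph_def = tf.get_default_graph().as_graph_def()
# Print the last few node names as candidates for --output_node_names
print([node.name for node in graph_def.node][-10:])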
The following simple example (an XOR example) shows how to export a Keras model (both in h5 format and in pb format), and how to use the model in Python and C++:
train.py:
import numpy as np
import tensorflow as tf
def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ''
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph
X = np.array([[0,0], [0,1], [1,0], [1,1]], 'float32')
Y = np.array([[0], [1], [1], [0]], 'float32')
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(64, input_dim=2, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])
model.fit(X, Y, batch_size=1, epochs=100, verbose=0)
# inputs: ['dense_input']
print('inputs: ', [input.op.name for input in model.inputs])
# outputs: ['dense_4/Sigmoid']
print('outputs: ', [output.op.name for output in model.outputs])
model.save('./xor.h5')
frozen_graph = freeze_session(tf.keras.backend.get_session(), output_names=[out.op.name for out in model.outputs])
tf.train.write_graph(frozen_graph, './', 'xor.pbtxt', as_text=True)
tf.train.write_graph(frozen_graph, './', 'xor.pb', as_text=False)
predict.py:
import numpy as np
import tensorflow as tf
model = tf.keras.models.load_model('./xor.h5')
# 0 ^ 0 = [[0.01974997]]
print('0 ^ 0 = ', model.predict(np.array([[0, 0]])))
# 0 ^ 1 = [[0.99141496]]
print('0 ^ 1 = ', model.predict(np.array([[0, 1]])))
# 1 ^ 0 = [[0.9897714]]
print('1 ^ 0 = ', model.predict(np.array([[1, 0]])))
# 1 ^ 1 = [[0.00406971]]
print('1 ^ 1 = ', model.predict(np.array([[1, 1]])))
opencv-predict.py:
import numpy as np
import cv2 as cv
model = cv.dnn.readNetFromTensorflow('./xor.pb')
# 0 ^ 0 = [[0.01974997]]
model.setInput(np.array([[0, 0]]), name='dense_input')
print('0 ^ 0 = ', model.forward(outputName='dense_4/Sigmoid'))
# 0 ^ 1 = [[0.99141496]]
model.setInput(np.array([[0, 1]]), name='dense_input')
print('0 ^ 1 = ', model.forward(outputName='dense_4/Sigmoid'))
# 1 ^ 0 = [[0.9897714]]
model.setInput(np.array([[1, 0]]), name='dense_input')
print('1 ^ 0 = ', model.forward(outputName='dense_4/Sigmoid'))
# 1 ^ 1 = [[0.00406971]]
model.setInput(np.array([[1, 1]]), name='dense_input')
print('1 ^ 1 = ', model.forward(outputName='dense_4/Sigmoid'))
predict.cpp:
#include <cstdlib>
#include <iostream>
#include <opencv2/opencv.hpp>
int main(int argc, char **argv)
{
    cv::dnn::Net net;
    net = cv::dnn::readNetFromTensorflow("./xor.pb");

    // 0 ^ 0 = [0.018541215]
    float x0[] = { 0, 0 };
    net.setInput(cv::Mat(1, 2, CV_32F, x0), "dense_input");
    std::cout << "0 ^ 0 = " << net.forward("dense_4/Sigmoid") << std::endl;

    // 0 ^ 1 = [0.98295897]
    float x1[] = { 0, 1 };
    net.setInput(cv::Mat(1, 2, CV_32F, x1), "dense_input");
    std::cout << "0 ^ 1 = " << net.forward("dense_4/Sigmoid") << std::endl;

    // 1 ^ 0 = [0.98810625]
    float x2[] = { 1, 0 };
    net.setInput(cv::Mat(1, 2, CV_32F, x2), "dense_input");
    std::cout << "1 ^ 0 = " << net.forward("dense_4/Sigmoid") << std::endl;

    // 1 ^ 1 = [0.010002014]
    float x3[] = { 1, 1 };
    net.setInput(cv::Mat(1, 2, CV_32F, x3), "dense_input");
    std::cout << "1 ^ 1 = " << net.forward("dense_4/Sigmoid") << std::endl;

    return EXIT_SUCCESS;
}
At the time of writing, all the older answers above are outdated. As of TensorFlow 2.1:
from tensorflow.keras.models import Model, load_model
model = load_model(MODEL_FULLPATH)
model.save(MODEL_FULLPATH_MINUS_EXTENSION)
This will create a folder containing "saved_model.pb".
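If a downstream consumer (for example OpenCV's dnn module, or the mobile tooling mentioned in the question) still needs a single frozen .pb instead of a SavedModel folder, here is a minimal TF 2.x sketch; MODEL_FULLPATH is the same placeholder as above, and the output file name "frozen_model.pb" is my own choice:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.keras.models.load_model(MODEL_FULLPATH)
# Wrap the Keras model in a concrete function so its variables can be inlined as constants
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(concrete_func)
# Write the frozen GraphDef to a single .pb file
tf.io.write_graph(frozen_func.graph.as_graph_def(), '.', 'frozen_model.pb', as_text=False)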
There is one very important point when you want to convert to TensorFlow: if you use dropout, batch normalization or any other layers like these (which are not trainable but compute values), you should change the learning phase of the Keras backend. There is a discussion about it.
import keras.backend as K
K.set_learning_phase(0)  # 0 = testing/inference, 1 = training mode; set this before building or loading the model
小智 7
This solution worked for me. Thanks to https://medium.com/tensorflow/training-and-serving-ml-models-with-tf-keras-fd975cc0fa27
import tensorflow as tf
# The export path contains the name and the version of the model
tf.keras.backend.set_learning_phase(0) # Ignore dropout at inference
model = tf.keras.models.load_model('./model.h5')
export_path = './PlanetModel/1'
# Fetch the Keras session and save the model
# The signature definition is defined by the input and output tensors
# And stored with the default serving key
with tf.keras.backend.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name: t for t in model.outputs})
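As a quick check (my addition, not part of the original answer), the exported SavedModel can be loaded back in a fresh TF 1.x session and queried through its default serving signature; the zero-filled dummy batch is only a placeholder input shaped from the Keras model loaded above:

import numpy as np
import tensorflow as tf

# Open a fresh graph/session so we exercise the exported files, not the live Keras model
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], export_path)
    signature = meta_graph.signature_def['serving_default']
    input_name = signature.inputs['input_image'].name
    output_name = list(signature.outputs.values())[0].name
    dummy = np.zeros((1,) + tuple(model.input_shape[1:]), dtype=np.float32)  # placeholder batch of one
    print(sess.run(output_name, feed_dict={input_name: dummy}))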