mrg*_*oom · 10 · quantization, deep-learning, keras, tensorflow
Is it possible to use tf.contrib.quantize.create_training_graph with an already trained Keras model for model quantization?
As I understand it, I can get the tf.Graph from the Keras model, but can I then fine-tune it after modifying it with tf.contrib.quantize.create_training_graph?
I was able to call tf.contrib.quantize.create_training_graph(input_graph=K.get_session().graph, quant_delay=int(0)) after defining and loading the model, but I get:
2019-02-22 14:56:24.216742: W tensorflow/c/c_api.cc:686] Operation '{name:'global_average_pooling2d_1_1/Mean' id:3777 op device:{} def:{global_average_pooling2d_1_1/Mean = Mean[T=DT_FLOAT, Tidx=DT_INT32, keep_dims=false](conv2d_15_1/act_quant/FakeQuantWithMinMaxVars:0, global_average_pooling2d_1_1/Mean/reduction_indices)}}' was changed by updating input tensor after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
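A minimal sketch of the ordering that, as far as I can tell, avoids this warning: the graph has to be rewritten before the session has run any ops, i.e. call create_training_graph right after the model is built, and only then load weights, compile and fine-tune. The model, weight path and training data below are placeholders, not the actual setup:
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.applications import MobileNet

# Build the model first; do not load weights or compile yet, so that
# nothing has been executed in the underlying session.
model = MobileNet(weights=None, input_shape=(224, 224, 3))  # placeholder model

# Rewrite the session graph with FakeQuant nodes BEFORE any session.run
# (loading weights or fitting would already run ops and trigger the warning).
tf.contrib.quantize.create_training_graph(
    input_graph=K.get_session().graph, quant_delay=0)

# Now restore the pretrained weights and fine-tune as usual; the FakeQuant
# min/max variables are initialized together with the other new variables.
model.load_weights('keras_weights.h5')  # placeholder path
model.compile(optimizer='sgd', loss='categorical_crossentropy')
model.fit(x_train, y_train, epochs=1)   # x_train / y_train assumed to exist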
When converting Keras -> TensorFlow -> TFLite, I was at least able to save the model with uint8 weights, although as I understand it, the model input and the inference itself are still fp32.
# Post-training (weight-only) quantization of the frozen Keras/TensorFlow graph.
converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tf_model.pb',
    input_arrays=input_node_names,    # names of the graph's input nodes
    output_arrays=output_node_names)  # names of the graph's output nodes
converter.post_training_quantize = True  # quantizes weights to uint8 only
tflite_model = converter.convert()
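As far as I understand, post_training_quantize only quantizes the weights, which is why the input and inference stay fp32. For a fully uint8 model, the TF 1.x converter also needs inference_type and quantized_input_stats (and ideally a graph that was trained with the fake-quant nodes from the sketch above). A rough sketch; the (mean, std_dev) values and the default ranges are placeholders that would have to match the real input preprocessing:
converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tf_model.pb',
    input_arrays=input_node_names,
    output_arrays=output_node_names)

# Full integer quantization instead of weight-only quantization.
converter.inference_type = tf.contrib.lite.constants.QUANTIZED_UINT8
# (mean, std_dev) mapping float input values to uint8; placeholder values.
converter.quantized_input_stats = {input_node_names[0]: (128.0, 128.0)}
# Fallback min/max for ops without recorded ranges; placeholder values.
converter.default_ranges_stats = (0.0, 6.0)

tflite_model = converter.convert()
open('model_quant_uint8.tflite', 'wb').write(tflite_model)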