After converting the model with this configuration:
import tensorflow as tf

# Convert the frozen graph, falling back to TensorFlow (Flex) ops for
# anything that has no TFLite builtin equivalent.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "path/to/graph.pb",
    input_arrays=["Cast"],
    output_arrays=["features"],
    input_shapes={"Cast": [1, 128, 64, 3]})
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
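For completeness, the converted flatbuffer would typically be written to disk before being loaded into an interpreter; that step is not shown in the question, so the file name below is a hypothetical one:

# Hypothetical serialization step (file name assumed, not from the question).
with open("model_select_ops.tflite", "wb") as f:
    f.write(tflite_model)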
A .tflite model is produced, but calling allocate_tensors() on the interpreter raises this error:
RuntimeError: Regular TensorFlow ops are not supported by this interpreter.
Make sure you apply/link the Flex delegate before inference.
Node number 0 (FlexTensorArrayV3) failed to prepare.
So it seems that I did build tflite_convert with support for TensorFlow ops... why does the interpreter still complain about the Flex delegate?
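For reference, this error usually means that the binary running the model has no Flex (Select TF ops) delegate linked in, even though the converter emitted Flex nodes such as FlexTensorArrayV3. A minimal inference sketch, assuming a recent full `tensorflow` pip install (whose bundled tf.lite.Interpreter ships with Select TF ops support, unlike the slim tflite_runtime package) and reusing tflite_model from above:

import numpy as np
import tensorflow as tf

# Load the converted flatbuffer directly from memory.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()  # fails as above if Flex support is missing

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the declared shape [1, 128, 64, 3].
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
features = interpreter.get_tensor(output_details[0]["index"])

On platforms where the interpreter is built separately (e.g. C++ or Android), the Select TF ops / Flex delegate library has to be linked into that build as well; the conversion flags alone do not provide it at runtime.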