TFLite converter error: operation not supported

Mid*_*ang 2 — tags: adb, tensorflow, tensorflow-lite, bert-language-model

I'm trying to convert an ALBERT .pb model to TFLite.

I made the .pb model with https://github.com/google-research/albert in TF 1.15.

To make the tflite file (in TF 2.4.1) I used converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(saved_model_dir)  # path to the SavedModel directory
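For context, the whole conversion script (the convert.py referenced in the traceback) is roughly the following minimal sketch; the saved_model_dir path and the output filename are placeholders for illustration:

import tensorflow as tf  # TF 2.4.1

saved_model_dir = "./albert_savedmodel"  # hypothetical path to the SavedModel directory
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

with open("albert.tflite", "wb") as f:
    f.write(tflite_model)

Calling converter.convert() fails with: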

Traceback (most recent call last):
  File "convert.py", line 7, in <module>
    tflite_model = converter.convert()
  File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 983, in convert
    **converter_kwargs)
  File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 449, in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
  File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 200, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2021-04-25 17:30:33.543663: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ParseExample
2021-04-25 17:30:33.546255: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 163 operators, 308 arrays (0 quantized)
2021-04-25 17:30:33.547201: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 162 operators, 301 arrays (0 quantized)
2021-04-25 17:30:33.548519: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 162 operators, 301 arrays (0 quantized)
2021-04-25 17:30:33.550930: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 134 operators, 264 arrays (0 quantized)
2021-04-25 17:30:33.577037: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.578278: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.579051: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.580196: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 0 bytes, theoretical optimal value: 0 bytes.
2021-04-25 17:30:33.580514: I tensorflow/lite/toco/toco_tooling.cc:454] Number of parameters: 11640702
2021-04-25 17:30:33.580862: E tensorflow/lite/toco/toco_tooling.cc:481] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, ARG_MAX, CAST, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, MEAN, MUL, PACK, POW, RESHAPE, RSQRT, SHAPE, SOFTMAX, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, TANH, TRANSPOSE. Here is a list of operators for which you will need custom implementations: BatchMatMul, ParseExample.
Traceback (most recent call last):
  File "/home/pgb/anaconda3/envs/test2/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52, in execute
    enable_mlir_converter)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, ARG_MAX, CAST, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, MEAN, MUL, PACK, POW, RESHAPE, RSQRT, SHAPE, SOFTMAX, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, TANH, TRANSPOSE. Here is a list of operators for which you will need custom implementations: BatchMatMul, ParseExample.

So I used

converter.allow_custom_ops = True
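In the same assumed script as above, the flag is simply set on the converter before calling convert():

converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.allow_custom_ops = True  # conversion succeeds, but BatchMatMul/ParseExample still have no kernels at runtime
tflite_model = converter.convert()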

It worked, but when I tried to measure the runtime on an Android device using the method from https://www.tensorflow.org/lite/performance/measurement, there was no result at all (the CPU just went idle).

  1. I can't find BatchMatMul or ParseExample anywhere in the ALBERT GitHub code. Where do these ops come from?

  2. Is there any other way besides converter.allow_custom_ops = True?

  3. Could converter.allow_custom_ops = True be the reason the model fails to run under adb?

小智 6

Please consider using the Select TF ops option, so that you can fall back to TF ops when the TFLite builtin op coverage does not fit your case.

For the conversion procedure, you can enable the Select TF option as follows:

converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
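Put together with the converter from the question, an end-to-end version would look roughly like the sketch below (not tested against this exact model; saved_model_dir and the output filename are placeholders). The sanity check at the end relies on the Flex delegate that ships with the full TensorFlow pip package, which can typically load Select-TF-ops models:

import tensorflow as tf

saved_model_dir = "./albert_savedmodel"  # hypothetical path, same as in the sketch above
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
tflite_model = converter.convert()

with open("albert_select_tf.tflite", "wb") as f:
    f.write(tflite_model)

# Sanity check: load the converted model with the Python interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details())

One related point for question 3: a model converted with SELECT_TF_OPS (or containing custom ops) needs the corresponding kernels at runtime. On Android that usually means adding the org.tensorflow:tensorflow-lite-select-tf-ops dependency, and the benchmark tool from the measurement page needs a Flex-enabled build (e.g. benchmark_model_plus_flex); otherwise the model fails to load, which may explain why nothing ran under adb.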

Allowing custom ops requires users to write their own TFLite custom ops for the ops not covered by the TFLite builtin op set. For example, the BatchMatMul and ParseExample ops would need to be implemented by yourself. In most cases, using the existing TF op implementations is much easier than implementing custom ops.

Please refer to this link.