Cannot export a PyTorch model to ONNX

Dar*_*oob 5 pytorch google-colaboratory onnx

I am trying to convert a pretrained PyTorch model to ONNX, but I get the following error:

RuntimeError: step!=1 is currently not supported

I am trying this on the pretrained colorization model: https://github.com/richzhang/colorization

This is the code I am running in Google Colab:

!git clone https://github.com/richzhang/colorization.git
%cd colorization/
import torch
import colorizers

model = colorizers.siggraph17(pretrained=True).eval()
input_names = ["input"]
output_names = ["output"]
dummy_input = torch.randn(1, 1, 256, 256, device='cpu')
torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)

I would appreciate any help :)

Update 1: @Proko's suggestion solved the ONNX export problem. Now, when I try to convert the ONNX model to TensorRT, I am running into a new, possibly related issue. I get the following error:

[TensorRT] ERROR: Network must have at least one output

This is the code I used:

import torch
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt
import onnx

TRT_LOGGER = trt.Logger()

def build_engine(onnx_file_path):
    # initialize TensorRT engine and parse ONNX model
    builder = trt.Builder(TRT_LOGGER)
    builder.max_workspace_size = 1 << 25
    builder.max_batch_size = 1
    if builder.platform_has_fast_fp16:
        builder.fp16_mode = True

    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)
    
    # parse ONNX
    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        parser.parse(model.read())
    print('Completed parsing of ONNX file')

    # generate TensorRT engine optimized for the target platform
    print('Building an engine...')
    engine = builder.build_cuda_engine(network)
    context = engine.create_execution_context()
    print("Completed creating Engine")

    return engine, context

ONNX_FILE_PATH = 'siggraph17.onnx' # Exported using the code above
engine,_ = build_engine(ONNX_FILE_PATH)
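For what it's worth, parser.parse() returns False when parsing fails, and the parser records its errors, so a check along the lines of the sketch below (a hypothetical helper of my own, assuming the same TensorRT Python API as above) can show whether the network actually received any layers and outputs before the engine is built:

def parse_onnx_checked(onnx_file_path):
    # hypothetical helper, not part of the original script: parse and report ONNX parser errors
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_file_path, 'rb') as model:
        ok = parser.parse(model.read())
    if not ok:
        for i in range(parser.num_errors):
            print(parser.get_error(i))  # each entry names the ONNX node that failed to import
    print('layers:', network.num_layers, 'outputs:', network.num_outputs)
    return network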

I tried to force the build_engine function to mark the network's output as follows:

network.mark_output(network.get_layer(network.num_layers-1).get_output(0))

but it did not work. Any help is appreciated!

Pro*_*oko 3

As I mentioned in the comments, this happens because torch.onnx only supports slicing with step = 1, while the model slices with a step of 2:

self.model2(conv1_2[:,:,::2,::2])
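To reproduce the failure outside the colorization repo, here is a minimal sketch (the StepSlice module and the output file name are made up, and newer PyTorch / opset versions may export this without complaint):

import torch
import torch.nn as nn

class StepSlice(nn.Module):
    # hypothetical minimal module, not part of the colorization repo
    def forward(self, x):
        return x[:, :, ::2, ::2]  # slicing with step 2, which the exporter rejects here

dummy = torch.randn(1, 1, 8, 8)
# With the PyTorch version used in the question this raises:
# RuntimeError: step!=1 is currently not supported
torch.onnx.export(StepSlice().eval(), dummy, "step_slice.onnx")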

For now, your only option is to rewrite the slicing as other operations. You can get the correct indices by using range and reshape. Consider the following "step-less arange" function (I hope it is generic enough for anyone with a similar problem):

def sla(x, step):
    diff = x % step
    x += (diff > 0)*(step - diff) # add length to be able to reshape properly
    return torch.arange(x).reshape((-1, step))[:, 0]

Usage:

>>> sla(11, 3)
tensor([0, 3, 6, 9])

Now you can replace each slice like this:

conv2_2 = self.model2(conv1_2[:,:,self.sla(conv1_2.shape[2], 2),:][:,:,:, self.sla(conv1_2.shape[3], 2)])

Note: you should optimize this. The indices are computed on every call, so it is probably wise to precompute them.
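A minimal sketch of such precomputation (the DownsampleByIndex module and buffer names are hypothetical, and it assumes the sla function above is available as a free function):

import torch
import torch.nn as nn

class DownsampleByIndex(nn.Module):
    # hypothetical wrapper: precompute the "step-less arange" indices once at construction
    def __init__(self, height, width, step=2):
        super().__init__()
        # register_buffer keeps the indices on the module's device without making them parameters
        self.register_buffer('row_idx', sla(height, step))
        self.register_buffer('col_idx', sla(width, step))

    def forward(self, x):
        # equivalent to x[:, :, ::step, ::step], but expressed with precomputed index tensors
        return x[:, :, self.row_idx, :][:, :, :, self.col_idx]

Registering the indices as buffers also means they move with the model when .to(device) or .cuda() is called.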

I have tested this with my fork of the repository and was able to save the model:

https://github.com/prokotg/colorization