TensorRT (C++ API): undefined reference to "createNvOnnxParser_INTERNAL"

Saj*_*hil 3 c++ pytorch tensorrt google-colaboratory

I am trying to create a TensorRT engine from an ONNX model using the TensorRT C++ API. Following the documentation, I have written code to build the engine, serialize it, and write it to disk. I installed TensorRT 7 on Colab using the Debian installation instructions.

Here is the C++ code, which I am compiling with g++ rnxt.cpp -o rnxt:

#include <cuda_runtime_api.h>
#include <NvOnnxParser.h>
#include <NvInfer.h>

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <sstream>
#include <iterator>
#include <algorithm>

class Logger : public nvinfer1::ILogger           
 {
     void log(Severity severity, const char* msg) override
     {
         // suppress info-level messages
         if (severity != Severity::kINFO)
             std::cout << msg << std::endl;
     }
 } gLogger;


int main(){

    int maxBatchSize = 32;

    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);  
    nvinfer1::INetworkDefinition* network = builder->createNetworkV2(explicitBatch);

    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);
    
    
    parser->parseFromFile("saved_resnext.onnx", 1);
    for (int i = 0; i < parser->getNbErrors(); ++i)
    {
        std::cout << parser->getError(i)->desc() << std::endl;
    }

    builder->setMaxBatchSize(maxBatchSize);
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 20);
    nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);

    parser->destroy();
    network->destroy();
    config->destroy();
    builder->destroy();

    nvinfer1::IHostMemory *serializedModel = engine->serialize();

    // Open in binary mode so the serialized engine is written byte-for-byte.
    std::ofstream engine_file("saved_resnext.engine", std::ios::binary);

    engine_file.write((const char*)serializedModel->data(), serializedModel->size());

    serializedModel->destroy();
    engine->destroy();
    return 0;
 }

When compiling, I get the following errors:

/tmp/ccJaGxCX.o: In function `nvinfer1::(anonymous namespace)::createInferBuilder(nvinfer1::ILogger&)':
rnxt.cpp:(.text+0x19): undefined reference to `createInferBuilder_INTERNAL'
/tmp/ccJaGxCX.o: In function `nvonnxparser::(anonymous namespace)::createParser(nvinfer1::INetworkDefinition&, nvinfer1::ILogger&)':
rnxt.cpp:(.text+0x43): undefined reference to `createNvOnnxParser_INTERNAL'
collect2: error: ld returned 1 exit status

I was also getting errors related to <cuda_runtime_api.h>, so I copied those files from CUDA's include directory (/usr/local/cuda-11.0/targets/x86_64-linux/include) into /usr/include, after which I got the errors above. I don't have much experience with C++, and any help would be appreciated.

Edit: I also installed libnvinfer using

!apt-get install -y libnvinfer7=7.1.3-1+cuda11.0
!apt-get install -y libnvinfer-dev=7.1.3-1+cuda11.0

小智 5

This problem is caused by nvonnxparser.so not being linked in your build. Just add

target_link_libraries(${TARGET_NAME} nvonnxparser)

to your CMakeLists.txt.
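If you are compiling directly with g++ as in the question rather than through CMake, the equivalent fix is to pass the TensorRT libraries to the linker on the command line. A sketch, assuming the Debian packages placed the libraries on the default linker search path and CUDA lives under /usr/local/cuda-11.0 (the ONNX parser library may additionally require the libnvonnxparsers-dev package):

```shell
# -lnvinfer resolves createInferBuilder_INTERNAL,
# -lnvonnxparser resolves createNvOnnxParser_INTERNAL,
# -lcudart resolves the CUDA runtime symbols.
# The -I flag also makes copying CUDA headers into /usr/include unnecessary.
g++ rnxt.cpp -o rnxt \
    -I/usr/local/cuda-11.0/targets/x86_64-linux/include \
    -L/usr/local/cuda-11.0/lib64 \
    -lnvinfer -lnvonnxparser -lcudart
```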