Toybrick

Error when loading an ONNX model

奥古师弟

Registered member · Points: 115

OP · Posted 2020-6-16 17:15:19 · Views: 5962 · Replies: 0
The original PyTorch model works fine. After converting it to ONNX, `onnx.checker.check_model(model)` also loads the model without complaint. But calling `rknn.load_onnx(model)` fails with a shape error. I never specified an input size when loading the ONNX model with RKNN, so why does this error appear?
2020-06-16 17:47:49.569380: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1663] Cannot dlopen some GPU libraries. Skipping registering GPU devices...
2020-06-16 17:47:52.545700: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
(op_type:Transpose, name: Inferred shape and existing shape differ in dimension 3: (40) vs (20)
E Catch exception when loading onnx model: /home/hjy/home/huangjinyong/alg_proj/yolov5-master/models/test.onnx!
E Traceback (most recent call last):
E   File "rknn/api/rknn_base.py", line 510, in rknn.api.rknn_base.RKNNBase.load_onnx
E   File "rknn/base/RKNNlib/converter/convert_onnx.py", line 514, in rknn.base.RKNNlib.converter.convert_onnx.convert_onnx.__init__
E   File "rknn/base/RKNNlib/converter/convert_onnx.py", line 225, in rknn.base.RKNNlib.converter.convert_onnx.onnx_shape_infer_engine.infer_shape
E   File "/root/venv/lib/python3.6/site-packages/onnx/shape_inference.py", line 36, in infer_shapes
E     inferred_model_str = C.infer_shapes(model_str)
E RuntimeError: Inferred shape and existing shape differ in dimension 3: (40) vs (20)


