The original PyTorch model works fine. After converting it to ONNX, `onnx.checker.check_model(model)` also loads the model without complaint. But calling `rknn.load_onnx(model)` raises a shape error. I never specified any input size when loading the ONNX model with RKNN, so why am I getting this error?
2020-06-16 17:47:49.569380: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1663] Cannot dlopen some GPU libraries. Skipping registering GPU devices...
2020-06-16 17:47:52.545700: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
(op_type:Transpose, name: Inferred shape and existing shape differ in dimension 3: (40) vs (20)
E Catch exception when loading onnx model: /home/hjy/home/huangjinyong/alg_proj/yolov5-master/models/test.onnx!
E Traceback (most recent call last):
E File "rknn/api/rknn_base.py", line 510, in rknn.api.rknn_base.RKNNBase.load_onnx
E File "rknn/base/RKNNlib/converter/convert_onnx.py", line 514, in rknn.base.RKNNlib.converter.convert_onnx.convert_onnx.__init__
E File "rknn/base/RKNNlib/converter/convert_onnx.py", line 225, in rknn.base.RKNNlib.converter.convert_onnx.onnx_shape_infer_engine.infer_shape
E File "/root/venv/lib/python3.6/site-packages/onnx/shape_inference.py", line 36, in infer_shapes
E inferred_model_str = C.infer_shapes(model_str)
E RuntimeError: Inferred shape and existing shape differ in dimension 3: (40) vs (20)