Problem description: I previously converted the official FaceNet model 20180402-114759.pb successfully with RKNN Toolkit, and it ran fine on the RV1126. Now I need to target the RK3588, so I tried converting the model with RKNN Toolkit2, but it reports an error, and modifying the nodes did not get the conversion through either. Any insight would be much appreciated!
Conversion code:
from rknn.api import RKNN

INPUT_SIZE = 160

if __name__ == '__main__':
    # Create RKNN object
    rknn = RKNN(verbose=True)

    # Config for model input preprocessing
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
    print('config done')

    # Load TensorFlow model
    print('--> Loading model')
    rknn.load_tensorflow(tf_pb='./20180402-114759.pb',
                         inputs=['input'],
                         outputs=['InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1'],
                         input_size_list=[[1, INPUT_SIZE, INPUT_SIZE, 3]])
    print('done')

    # Build model
    print('--> Building model')
    rknn.build(do_quantization=False)
    print('done')

    # Export RKNN model
    print('--> Export rknn model')
    rknn.export_rknn('./facenet.rknn')
    rknn.release()
Error output:
/home/htzc/anaconda3/envs/rk212/bin/python /home/htzc/Desktop/rk3588face/pb2rknn/pb2rknn.py
W __init__: rknn-toolkit2 version: 1.2.0-f7bb160f
config done
--> Loading model
W load_tensorflow: inputs name should be a tensor name instead of node name
2022-05-26 15:32:52.985302: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying strip_unused_nodes
2022-05-26 15:32:53.091163: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying sort_by_execution_order
2022-05-26 15:32:53.139597: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying fold_constants
2022-05-26 15:32:53.333489: E tensorflow/tools/graph_transforms/transform_graph.cc:332] fold_constants: Ignoring error You must feed a value for placeholder tensor 'phase_train' with dtype bool
[[{{node phase_train}}]]
2022-05-26 15:32:53.368819: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying fold_batch_norms
2022-05-26 15:32:53.471022: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying fold_old_batch_norms
2022-05-26 15:32:55.117274: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2022-05-26 15:32:55.137068: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2499950000 Hz
2022-05-26 15:32:55.137324: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563f45162bc0 executing computations on platform Host. Devices:
2022-05-26 15:32:55.137342: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
W:tensorflow:From /home/htzc/anaconda3/envs/rk212/lib/python3.6/site-packages/rknn/api/rknn.py:68: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
W:tensorflow:From /home/htzc/anaconda3/envs/rk212/lib/python3.6/site-packages/tensorflow/python/framework/graph_util_impl.py:270: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
2022-05-26 15:32:56.756005: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying fold_constants
2022-05-26 15:32:56.991171: E tensorflow/tools/graph_transforms/transform_graph.cc:332] fold_constants: Ignoring error You must feed a value for placeholder tensor 'phase_train' with dtype bool
[[{{node phase_train}}]]
2022-05-26 15:32:57.032718: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying remove_attribute
2022-05-26 15:32:57.078643: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying fold_batch_norms
2022-05-26 15:32:57.179528: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying fold_old_batch_norms
Shape of placeholder 'phase_train' is unknown, treated it as a scalar. Please use the --inputs flag and append the shape to the input name if this input is not a scalar.
2022-05-26 15:33:01.744908: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Scaling constant 0.400000 for dropout pattern rooted at InceptionResnetV1/Logits/Dropout/cond/dropout/mul is inconsistent with dropout ratio 0.400000. The pattern will not be replaced with an ONNX dropout node.
Scaling constant 0.400000 for dropout pattern rooted at InceptionResnetV1/Logits/Dropout/cond/dropout/mul is inconsistent with dropout ratio 0.400000. The pattern will not be replaced with an ONNX dropout node.
E load_tensorflow: Catch exception when loading tensorflow model: ./20180402-114759.pb!
E load_tensorflow: Traceback (most recent call last):
E load_tensorflow: File "rknn/api/rknn_base.py", line 1015, in rknn.api.rknn_base.RKNNBase.load_tensorflow
E load_tensorflow: File "rknn/api/rknn_base.py", line 584, in rknn.api.rknn_base.RKNNBase._create_ir_and_inputs_meta
E load_tensorflow: File "rknn/api/ir_graph.py", line 39, in rknn.api.ir_graph.IRGraph.__init__
E load_tensorflow: File "rknn/api/ir_graph.py", line 180, in rknn.api.ir_graph.IRGraph.rebuild
E load_tensorflow: File "rknn/api/ir_graph.py", line 140, in rknn.api.ir_graph.IRGraph._clean_model
E load_tensorflow: File "rknn/api/ir_graph.py", line 59, in rknn.api.ir_graph.IRGraph.infer_shapes
E load_tensorflow: File "/home/htzc/anaconda3/envs/rk212/lib/python3.6/site-packages/onnx/checker.py", line 93, in check_model
E load_tensorflow: C.check_model(model.SerializeToString())
E load_tensorflow: onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'InceptionResnetV1/Repeat/block35_1/Branch_1/Conv2d_0b_3x3/BatchNorm/Const:0' of node:
E load_tensorflow: input: "InceptionResnetV1/Conv2d_1a_3x3/Conv2D:0" input: "InceptionResnetV1/Repeat/block35_1/Branch_1/Conv2d_0b_3x3/BatchNorm/Const:0" input: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/beta/read__494" input: "Squeeze__3874:0" input: "Div__3882:0" output: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm:0" name: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm" op_type: "BatchNormalization" attribute { name: "epsilon" f: 0.001 type: FLOAT } domain: ""
E load_tensorflow: is not output of any previous nodes.
E load_tensorflow: ==> Context: Bad node spec: input: "phase_train:0" output: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/Merge:0" name: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond_If__520" op_type: "If" attribute { name: "then_branch" g { node { input: "InceptionResnetV1/Conv2d_1a_3x3/Conv2D:0" output: "Shape__3877:0" name: "Shape__3877" op_type: "Shape" domain: "" } node { input: "InceptionResnetV1/Conv2d_1a_3x3/Conv2D:0" output: "ReduceMean__3873:0" name: "ReduceMean__3873" op_type: "ReduceMean" attribute { name: "axes" ints: 0 ints: 2 ints: 3 type: INTS } attribute { name: "keepdims" i: 1 type: INT } domain: "" } node { input: "InceptionResnetV1/Conv2d_1a_3x3/Conv2D:0" input: "ReduceMean__3873:0" output: "Sub__3875:0" name: "Sub__3875" op_type: "Sub" domain: "" } node { input: "Sub__3875:0" output: "ReduceSumSquare__3876:0" name: "ReduceSumSquare__3876" op_type: "ReduceSumSquare" attribute { name: "axes" ints: 0 ints: 2 ints: 3 type: INTS } attribute { name: "keepdims" i: 0 type: INT } domain: "" } node { input: "ReduceMean__3873:0" output: "Squeeze__3874:0" name: "Squeeze__3874" op_type: "Squeeze" attribute { name: "axes" ints: 0 ints: 2 ints: 3 type: INTS } domain: "" } node { input: "Shape__3877:0" input: "axes_const__3878" output: "Gather__3879:0" name: "Gather__3879" op_type: "Gather" domain: "" } node { input: "Gather__3879:0" output: "ReduceProd__3880:0" name: "ReduceProd__3880" op_type: "ReduceProd" attribute { name: "axes" ints: 0 type: INTS } attribute { name: "keepdims" i: 0 type: INT } domain: "" } node { input: "ReduceProd__3880:0" output: "Cast__3881:0" name: "Cast__3881" op_type: "Cast" attribute { name: "to" i: 1 type: INT } domain: "" } node { input: "ReduceSumSquare__3876:0" input: "Cast__3881:0" output: "Div__3882:0" name: "Div__3882" op_type: "Div" domain: "" } node { input: "InceptionResnetV1/Conv2d_1a_3x3/Conv2D:0" input: "InceptionResnetV1/Repeat/block35_1/Branch_1/Conv2d_0b_3x3/BatchNorm/Const:0" input: 
"InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/beta/read__494" input: "Squeeze__3874:0" input: "Div__3882:0" output: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm:0" name: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm" op_type: "BatchNormalization" attribute { name: "epsilon" f: 0.001 type: FLOAT } domain: "" } node { input: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm:0" output: "sub_graph_ending_node_Identity__517:0" name: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm__3872" op_type: "Transpose" attribute { name: "perm" ints: 0 ints: 2 ints: 3 ints: 1 type: INTS } } name: "tf2onnx__516" initializer { dims: 3 data_type: 7 name: "axes_const__3878" raw_data: "\000\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\003\000\000\000\000\000\000\000" } doc_string: "graph for InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond_If__520 then_branch" output { name: "sub_graph_ending_node_Identity__517:0" type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 79 } dim { dim_value: 79 } dim { dim_value: 32 } } } } } value_info { name: "Shape__3877:0" type { tensor_type { elem_type: 7 shape { dim { dim_value: 4 } } } } } value_info { name: "ReduceMean__3873:0" type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 32 } dim { dim_value: 1 } dim { dim_value: 1 } } } } } value_info { name: "Sub__3875:0" type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 32 } dim { dim_value: 79 } dim { dim_value: 79 } } } } } value_info { name: "ReduceSumSquare__3876:0" type { tensor_type { elem_type: 1 shape { dim { dim_value: 32 } } } } } value_info { name: "Squeeze__3874:0" type { tensor_type { elem_type: 1 shape { dim { dim_value: 32 } } } } } value_info { name: "Gather__3879:0" type { tensor_type { elem_type: 7 } } } value_info { name: "ReduceProd__3880:0" type { tensor_type { elem_type: 7 } } } value_info { name: "Cast__3881:0" type { 
tensor_type { elem_type: 1 } } } value_info { name: "Div__3882:0" type { tensor_type { elem_type: 1 } } } value_info { name: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm:0" type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 32 } dim { dim_value: 79 } dim { dim_value: 79 } } } } } } type: GRAPH } attribute { name: "else_branch" g { node { input: "InceptionResnetV1/Conv2d_1a_3x3/Conv2D:0" input: "InceptionResnetV1/Repeat/block35_1/Branch_1/Conv2d_0b_3x3/BatchNorm/Const:0" input: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/beta/read__494" input: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/moving_mean/read__493" input: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/moving_variance/read__492" output: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm_1:0" name: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm_1" op_type: "BatchNormalization" attribute { name: "epsilon" f: 0.001 type: FLOAT } domain: "" } node { input: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm_1:0" output: "sub_graph_ending_node_Identity__519:0" name: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm_1__3884" op_type: "Transpose" attribute { name: "perm" ints: 0 ints: 2 ints: 3 ints: 1 type: INTS } } name: "tf2onnx__518" doc_string: "graph for InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond_If__520 else_branch" output { name: "sub_graph_ending_node_Identity__519:0" type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 79 } dim { dim_value: 79 } dim { dim_value: 32 } } } } } value_info { name: "InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/FusedBatchNorm_1:0" type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 32 } dim { dim_value: 79 } dim { dim_value: 79 } } } } } } type: GRAPH } domain: ""
done
--> Building model
done
--> Export rknn model
E build: The model has not been loaded, please load it first!
E export_rknn: RKNN model is None, please load & build model first!
Process finished with exit code 0
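For what it's worth, every error in the log above traces back to the boolean `phase_train` placeholder (`fold_constants` complains it must be fed, and the bad `If` node is the conditional BatchNorm that branches on it). One thing that might be worth trying before calling `load_tensorflow` is rewriting that placeholder into a constant `False`, so the training branches of the conditional BatchNorm can fold away. The sketch below is an untested assumption on my part (the function name `freeze_phase_train` is mine; the placeholder name `phase_train` is taken from the log), not a confirmed fix:

```python
# Sketch: rewrite the boolean 'phase_train' Placeholder into a Const(False)
# inside a frozen .pb, so conditional-BatchNorm 'If' branches can fold away.
import tensorflow as tf
from tensorflow.python.framework import tensor_util


def freeze_phase_train(in_pb, out_pb, placeholder_name='phase_train'):
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(in_pb, 'rb') as f:
        graph_def.ParseFromString(f.read())
    for node in graph_def.node:
        if node.name == placeholder_name and node.op == 'Placeholder':
            # Turn the Placeholder into a Const holding a scalar False.
            node.op = 'Const'
            node.ClearField('attr')
            node.attr['dtype'].type = tf.bool.as_datatype_enum
            node.attr['value'].tensor.CopyFrom(
                tensor_util.make_tensor_proto(False, dtype=tf.bool))
    with tf.io.gfile.GFile(out_pb, 'wb') as f:
        f.write(graph_def.SerializeToString())
```

If this works, the rewritten `.pb` (e.g. `freeze_phase_train('./20180402-114759.pb', './facenet_inference.pb')`) would then be the one passed to `rknn.load_tensorflow`.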