Toybrick

RK3399Pro model conversion problem, looking for a solution

kimizij

Newbie

Points
32
Posted on 2019-3-8 08:10:20    Views: 8711 | Replies: 1
My TensorFlow pb model fails to convert, and it looks like the problem is with tf.add. A sketch of the conversion flow is below, followed by the full log. Could anyone help me figure this out?
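For reference, the conversion follows the usual rknn-toolkit flow sketched below. This is only a minimal illustration: the file paths, preprocessing config, and quantization dataset are placeholders, and the input/output node names and input size (input, Relu_2, 640x960x3) are read off the shapes in the log; exact parameter names can differ between toolkit versions.

from rknn.api import RKNN

# Minimal rknn-toolkit conversion sketch; paths and preprocessing values are placeholders.
rknn = RKNN(verbose=True)

# Preprocessing config (illustrative values, not taken from the original post).
rknn.config(channel_mean_value='0 0 0 1', reorder_channel='0 1 2')

# Input/output node names and size follow the log: input layer "input",
# output "Relu_2", input shape 640x960x3 (NHWC).
ret = rknn.load_tensorflow(tf_pb='./model.pb',
                           inputs=['input'],
                           outputs=['Relu_2'],
                           input_size_list=[[640, 960, 3]])

# Quantized build; dataset.txt would list calibration images (placeholder path).
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')

# Export the converted .rknn model.
ret = rknn.export_rknn('./model.rknn')

rknn.release()

The "--> Loading model" and "--> Building model" lines in the log appear to correspond to the load and build steps above.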
--> Loading model
D import clients finished
I Current TF Model producer version 0 min consumer version 0 bad consumer version []
I Have 0 tensors convert to const tensor
[]
I build output layer attach_Relu_2ut0
I build input layer inputut0
I Try match Relu Relu_2
I Match [['Relu_2']] [['Relu']] to [['relu']]
I Try match Add Add
I Match [['Add']] [['Add']] to [['add']]
I Try match BiasAdd BiasAdd_4
I Match [['BiasAdd_4', 'Conv2D_4', 'BiasAdd_4/bias', 'Conv2D_4/filter']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
I Try match BiasAdd BiasAdd_3
I Match [['BiasAdd_3', 'Conv2D_3', 'BiasAdd_3/bias', 'Conv2D_3/filter']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
I Try match MaxPool MaxPool
I Match [['MaxPool']] [['MaxPool']] to [['pooling']]
I Try match Relu Relu_1
I Match [['Relu_1']] [['Relu']] to [['relu']]
I Try match Pad Pad_1
I Match [['Pad_1', 'Pad_1/paddings']] [['Pad', 'C']] to [['pad']]
I Try match BiasAdd BiasAdd_2
I Match [['BiasAdd_2', 'Conv2D_2', 'BiasAdd_2/bias', 'Conv2D_2/filter']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
I Try match Relu Relu
I Match [['Relu']] [['Relu']] to [['relu']]
I Try match Relu res2_0_branch2a
I Match [['res2_0_branch2a']] [['Relu']] to [['relu']]
I Try match BiasAdd BiasAdd
I Match [['BiasAdd', 'Conv2D', 'BiasAdd/bias', 'Conv2D/filter']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
I Try match BiasAdd BiasAdd_1
I Match [['BiasAdd_1', 'Conv2D_1', 'BiasAdd_1/bias', 'Conv2D_1/filter']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
I Try match Pad Pad
I Match [['Pad', 'Pad/paddings']] [['Pad', 'C']] to [['pad']]
D connect Add_3 0  ~ Relu_2_2 0
D connect BiasAdd_3_5 0  ~ Add_3 0
D connect BiasAdd_4_4 0  ~ Add_3 1
D connect MaxPool_6 0  ~ BiasAdd_4_4 0
D connect Relu_1_7 0  ~ BiasAdd_3_5 0
D connect Pad_1_8 0  ~ MaxPool_6 0
D connect BiasAdd_2_9 0  ~ Relu_1_7 0
D connect Relu_10 0  ~ Pad_1_8 0
D connect res2_0_branch2a_11 0  ~ BiasAdd_2_9 0
D connect BiasAdd_12 0  ~ Relu_10 0
D connect BiasAdd_1_13 0  ~ res2_0_branch2a_11 0
D connect Pad_14 0  ~ BiasAdd_12 0
D connect MaxPool_6 0  ~ BiasAdd_1_13 0
D connect attach_input/out0_1 0  ~ Pad_14 0
D connect Relu_2_2 0  ~ attach_Relu_2/out0_0 0
D Process attach_input/out0_1 ...
D RKNN output shape(input): (0 640 960 3)
D Process Pad_14 ...
D RKNN output shape(pad): (0 646 966 3)
D Process BiasAdd_12 ...
D RKNN output shape(convolution): (0 320 480 64)
D Process Relu_10 ...
D RKNN output shape(relu): (0 320 480 64)
D Process Pad_1_8 ...
D RKNN output shape(pad): (0 322 482 64)
D Process MaxPool_6 ...
D RKNN output shape(Pooling): (0 160 240 64)
D Process BiasAdd_1_13 ...
D RKNN output shape(convolution): (0 160 240 64)
D Process res2_0_branch2a_11 ...
D RKNN output shape(relu): (0 160 240 64)
D Process BiasAdd_2_9 ...
D RKNN output shape(convolution): (0 160 240 64)
D Process Relu_1_7 ...
D RKNN output shape(relu): (0 160 240 64)
D Process BiasAdd_3_5 ...
D RKNN output shape(convolution): (0 160 240 256)
D Process BiasAdd_4_4 ...
D RKNN output shape(convolution): (0 160 240 256)
D Process Add_3 ...
D RKNN output shape(Add): (0 160 240 256)
D Process Relu_2_2 ...
D RKNN output shape(relu): (0 160 240 256)
D Process attach_Relu_2/out0_0 ...
D RKNN output shape(output): (0 160 240 256)
I Build final_model complete.
D Optimizing network with force_1d_tensor, swapper, merge_layer, auto_fill_bn, avg_pool_transform, auto_fill_zero_bias, proposal_opt_import, import_strip_op
D Optimizing network with conv2d_big_kernel_size_transform
D Optimizing network with auto_fill_tf_quantize, qnt_single_transmit_quantize, align_quantize, broadcast_quantize, qnt_adjust_coef
done
--> Building model
D import clients finished
I Loading network...
I Load net...
D Load layer attach_Relu_2/out0_0 ...
D Load layer attach_input/out0_1 ...
D Load layer Relu_2_2 ...
D Load layer Add_3 ...
D Load layer BiasAdd_4_4 ...
D Load layer BiasAdd_3_5 ...
D Load layer MaxPool_6 ...
D Load layer Relu_1_7 ...
D Load layer Pad_1_8 ...
D Load layer BiasAdd_2_9 ...
D Load layer Relu_10 ...
D Load layer res2_0_branch2a_11 ...
D Load layer BiasAdd_12 ...
D Load layer BiasAdd_1_13 ...
D Load layer Pad_14 ...
I Load net complete...
I Load data...
D Process attach_input/out0_1 ...
D RKNN output shape(input): (0 640 960 3)
D Process Pad_14 ...
D RKNN output shape(pad): (0 646 966 3)
D Process BiasAdd_12 ...
D RKNN output shape(convolution): (0 320 480 64)
D Process Relu_10 ...
D RKNN output shape(relu): (0 320 480 64)
D Process Pad_1_8 ...
D RKNN output shape(pad): (0 322 482 64)
D Process MaxPool_6 ...
D RKNN output shape(Pooling): (0 160 240 64)
D Process BiasAdd_1_13 ...
D RKNN output shape(convolution): (0 160 240 64)
D Process res2_0_branch2a_11 ...
D RKNN output shape(relu): (0 160 240 64)
D Process BiasAdd_2_9 ...
D RKNN output shape(convolution): (0 160 240 64)
D Process Relu_1_7 ...
D RKNN output shape(relu): (0 160 240 64)
D Process BiasAdd_3_5 ...
D RKNN output shape(convolution): (0 160 240 256)
D Process BiasAdd_4_4 ...
D RKNN output shape(convolution): (0 160 240 256)
D Process Add_3 ...
D RKNN output shape(Add): (0 160 240 256)
D Process Relu_2_2 ...
D RKNN output shape(relu): (0 160 240 256)
D Process attach_Relu_2/out0_0 ...
D RKNN output shape(output): (0 160 240 256)
I Build final_model complete.
I Initialzing network optimizer by Default ...
D Optimizing network with add_lstmunit_io, auto_fill_zero_bias, conv_kernel_transform, twod_op_transform, conv_1xn_transform, strip_op, extend_add_to_conv2d, extend_fc_to_conv2d, extend_unstack_split, swapper, merge_layer, transform_layer, proposal_opt, strip_op, auto_fill_reshape_zero, adjust_output_attrs
W extend Add_3 to add couldn't calculate core, do not extend.
D Merge ['BiasAdd_2_9', 'Relu_1_7'] (convolutionrelu)
D Merge ['BiasAdd_12', 'Relu_10'] (convolutionrelu)
D Merge ['BiasAdd_1_13', 'res2_0_branch2a_11'] (convolutionrelu)
D Transform BiasAdd_4_4 to convolutionrelu.
D Transform BiasAdd_3_5 to convolutionrelu.
D Optimizing network with t2c_insert_permute, t2c_calibrate_flow_shapes, t2c_convert_axis, t2c_convert_shape, t2c_convert_array
D Optimizing network with c2drv_convert_axis, c2drv_convert_shape, c2drv_convert_array, c2drv_cast_dtype
I Building data ...
D Packing BiasAdd_12_Relu_10 ...
D Packing BiasAdd_1_13_res2_0_branch2a_11 ...
D Packing BiasAdd_2_9_Relu_1_7 ...
D Packing trans_BiasAdd_3_5 ...
D Packing trans_BiasAdd_4_4 ...
D nn_param.pad.front_size=pad_front_1
D nn_param.pad.back_size=pad_back_1
D nn_param.pad.dim_num=4
D nn_param.pad.const_val=0
D nn_param.pad.mode=RK_NN_PAD_MODE_CONSTANT
D nn_param.conv2d.ksize[0]=7
D nn_param.conv2d.ksize[1]=7
D nn_param.conv2d.weights=64
D nn_param.conv2d.stride[0]=2
D nn_param.conv2d.stride[1]=2
D nn_param.conv2d.pad[0]=0
D nn_param.conv2d.pad[1]=0
D nn_param.conv2d.pad[2]=0
D nn_param.conv2d.pad[3]=0
D nn_param.conv2d.group=1
D nn_param.conv2d.dilation[0]=1
D nn_param.conv2d.dilation[1]=1
D nn_param.conv2d.multiplier=0
D vx_param.has_relu=TRUE
D vx_param.overflow_policy=VX_CONVERT_POLICY_WRAP
D vx_param.rounding_policy=VX_ROUND_POLICY_TO_ZERO
D vx_param.down_scale_size_rounding=VX_CONVOLUTIONAL_NETWORK_DS_SIZE_ROUNDING_FLOOR
D nn_param.pad.front_size=pad_front_2
D nn_param.pad.back_size=pad_back_2
D nn_param.pad.dim_num=4
D nn_param.pad.const_val=0
D nn_param.pad.mode=RK_NN_PAD_MODE_CONSTANT
D nn_param.pool.ksize[0]=3
D nn_param.pool.ksize[1]=3
D nn_param.pool.stride[0]=2
D nn_param.pool.stride[1]=2
D nn_param.pool.pad[0]=0
D nn_param.pool.pad[1]=0
D nn_param.pool.pad[2]=0
D nn_param.pool.pad[3]=0
D nn_param.pool.type=VX_CONVOLUTIONAL_NETWORK_POOLING_MAX
D nn_param.pool.round_type=RK_NN_ROUND_FLOOR
D vx_param.down_scale_size_rounding=VX_CONVOLUTIONAL_NETWORK_DS_SIZE_ROUNDING_FLOOR
D nn_param.conv2d.ksize[0]=1
D nn_param.conv2d.ksize[1]=1
D nn_param.conv2d.weights=64
D nn_param.conv2d.stride[0]=1
D nn_param.conv2d.stride[1]=1
D nn_param.conv2d.pad[0]=0
D nn_param.conv2d.pad[1]=0
D nn_param.conv2d.pad[2]=0
D nn_param.conv2d.pad[3]=0
D nn_param.conv2d.group=1
D nn_param.conv2d.dilation[0]=1
D nn_param.conv2d.dilation[1]=1
D nn_param.conv2d.multiplier=0
D vx_param.has_relu=TRUE
D vx_param.overflow_policy=VX_CONVERT_POLICY_WRAP
D vx_param.rounding_policy=VX_ROUND_POLICY_TO_ZERO
D vx_param.down_scale_size_rounding=VX_CONVOLUTIONAL_NETWORK_DS_SIZE_ROUNDING_FLOOR
D nn_param.conv2d.ksize[0]=1
D nn_param.conv2d.ksize[1]=1
D nn_param.conv2d.weights=256
D nn_param.conv2d.stride[0]=1
D nn_param.conv2d.stride[1]=1
D nn_param.conv2d.pad[0]=0
D nn_param.conv2d.pad[1]=0
D nn_param.conv2d.pad[2]=0
D nn_param.conv2d.pad[3]=0
D nn_param.conv2d.group=1
D nn_param.conv2d.dilation[0]=1
D nn_param.conv2d.dilation[1]=1
D nn_param.conv2d.multiplier=0
D vx_param.has_relu=FALSE
D vx_param.overflow_policy=VX_CONVERT_POLICY_WRAP
D vx_param.rounding_policy=VX_ROUND_POLICY_TO_ZERO
D vx_param.down_scale_size_rounding=VX_CONVOLUTIONAL_NETWORK_DS_SIZE_ROUNDING_FLOOR
D nn_param.conv2d.ksize[0]=3
D nn_param.conv2d.ksize[1]=3
D nn_param.conv2d.weights=64
D nn_param.conv2d.stride[0]=1
D nn_param.conv2d.stride[1]=1
D nn_param.conv2d.pad[0]=1
D nn_param.conv2d.pad[1]=1
D nn_param.conv2d.pad[2]=1
D nn_param.conv2d.pad[3]=1
D nn_param.conv2d.group=1
D nn_param.conv2d.dilation[0]=1
D nn_param.conv2d.dilation[1]=1
D nn_param.conv2d.multiplier=0
D vx_param.has_relu=TRUE
D vx_param.overflow_policy=VX_CONVERT_POLICY_WRAP
D vx_param.rounding_policy=VX_ROUND_POLICY_TO_ZERO
D vx_param.down_scale_size_rounding=VX_CONVOLUTIONAL_NETWORK_DS_SIZE_ROUNDING_FLOOR
D nn_param.conv2d.ksize[0]=1
D nn_param.conv2d.ksize[1]=1
D nn_param.conv2d.weights=256
D nn_param.conv2d.stride[0]=1
D nn_param.conv2d.stride[1]=1
D nn_param.conv2d.pad[0]=0
D nn_param.conv2d.pad[1]=0
D nn_param.conv2d.pad[2]=0
D nn_param.conv2d.pad[3]=0
D nn_param.conv2d.group=1
D nn_param.conv2d.dilation[0]=1
D nn_param.conv2d.dilation[1]=1
D nn_param.conv2d.multiplier=0
D vx_param.has_relu=FALSE
D vx_param.overflow_policy=VX_CONVERT_POLICY_WRAP
D vx_param.rounding_policy=VX_ROUND_POLICY_TO_ZERO
D vx_param.down_scale_size_rounding=VX_CONVOLUTIONAL_NETWORK_DS_SIZE_ROUNDING_FLOOR
I Build config finished.
done


zhangzj

Super Moderator

Points
1109
Posted on 2019-3-8 16:15:54
I don't see any error message in this log.
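Since the log ends with "done" and shows no explicit error, one way to narrow down where the conversion actually fails is to check the return code of each rknn-toolkit call (0 normally means success) and keep the full debug output in a file. A compressed sketch, assuming the toolkit version in use supports the verbose_file argument; paths and node names are the same placeholders as above:

from rknn.api import RKNN

# Diagnosis-oriented sketch: dump the full debug log to a file (verbose_file is
# assumed to be supported by this toolkit version) and print each step's return code.
rknn = RKNN(verbose=True, verbose_file='./convert.log')

ret = rknn.load_tensorflow(tf_pb='./model.pb', inputs=['input'],
                           outputs=['Relu_2'], input_size_list=[[640, 960, 3]])
print('load_tensorflow ret =', ret)   # non-zero means this step failed

ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
print('build ret =', ret)

ret = rknn.export_rknn('./model.rknn')
print('export_rknn ret =', ret)

rknn.release()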