Toybrick

Has anyone tried pix2pix on an RK3399Pro board?

ldol31627 (Intermediate Member, credits 310)
#1 (OP) | Posted 2019-11-1 17:05:10 | Views: 5441 | Replies: 13
https://github.com/ChengBinJin/pix2pix-tensorflow
I have never gotten this to work on the RK3399Pro. Has anyone succeeded?

jefferyzhang (Moderator, credits 12953)
#2 | Posted 2019-11-2 11:10:39
Paste the log from the failed run; otherwise how is anyone supposed to help you? - -#

ldol31627 (Intermediate Member, credits 310)
#3 (OP) | Posted 2019-11-4 14:34:17 (last edited by ldol31627 on 2019-11-4 14:35)

Quoting jefferyzhang (2019-11-2 11:10): "Paste the log from the failed run; otherwise how is anyone supposed to help you? - -#"

rknn-toolkit: 1.2.1
Error output:
--> Exporting model
done
--> Init runtime environment
D [rknn_init:1047] Input Tensors:
D [printRKNNTensor:960] index=0 name= n_dims=4 dims=[1 256 256 3] n_elems=196608 size=196608 fmt=NHWC type=UINT8 qnt_type=AFFINE fl=127 zp=127 scale=0.007843
D [rknn_init:1060] Output Tensors:
D [printRKNNTensor:960] index=0 name= n_dims=4 dims=[1 2 2 512] n_elems=2048 size=2048 fmt=NCHW type=UINT8 qnt_type=AFFINE fl=-123 zp=133 scale=0.013455
done
--> Running model
D [rknn_inputs_set:1262] 0 pass_through=0
D [rknn_inputs_set:1293] 0 input.type=3

I Queue cancelled.
ASSERT in NeuralNet.cpp.decompressKernel(1767): myNumOfVz == numOfVz
terminate called after throwing an instance of 'bool'
Aborted (core dumped)
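The tensor dump above reports `qnt_type=AFFINE` with a zero point and scale per tensor. Assuming the usual asymmetric-affine mapping real = scale * (q - zp) (my reading of the dump, not confirmed from Rockchip documentation), the input parameters zp=127, scale=0.007843 map uint8 values [0, 255] to roughly [-1, 1], which matches the normalization pix2pix typically expects. A quick sanity check (plain Python, helper name is mine):

```python
def dequantize(q, scale, zero_point):
    """Map an asymmetric-quantized uint8 value back to a real value."""
    return scale * (q - zero_point)

# Input tensor from the dump: zp=127, scale=0.007843 (about 1/127.5)
lo = dequantize(0, 0.007843, 127)
hi = dequantize(255, 0.007843, 127)
print(lo, hi)  # roughly -1.0 and +1.0
```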


========== full build log ==========
D Save log info to: ./build.log
D import clients finished
I Current TF Model producer version 0 min consumer version 0 bad consumer version []
I short-cut g_/e7_batchnorm/gammaut0 - g_/e7_batchnorm/batchnorm/mul:in1 skip g_/e7_batchnorm/gamma/read
I short-cut g_/e0_conv2d/biasesut0 - g_/e0_conv2d/BiasAdd:in1 skip g_/e0_conv2d/biases/read
I short-cut g_/d0_deconv2d/biasesut0 - g_/d0_deconv2d/BiasAdd:in1 skip g_/d0_deconv2d/biases/read
I short-cut g_/e7_conv2d/wut0 - g_/e7_conv2d/Conv2D:in1 skip g_/e7_conv2d/w/read
I short-cut g_/e3_batchnorm/gammaut0 - g_/e3_batchnorm/batchnorm/mul:in1 skip g_/e3_batchnorm/gamma/read
I short-cut g_/e1_batchnorm/betaut0 - g_/e1_batchnorm/batchnorm/sub:in0 skip g_/e1_batchnorm/beta/read
I short-cut g_/e2_batchnorm/gammaut0 - g_/e2_batchnorm/batchnorm/mul:in1 skip g_/e2_batchnorm/gamma/read
I short-cut g_/e1_batchnorm/gammaut0 - g_/e1_batchnorm/batchnorm/mul:in1 skip g_/e1_batchnorm/gamma/read
I short-cut g_/e3_batchnorm/betaut0 - g_/e3_batchnorm/batchnorm/sub:in0 skip g_/e3_batchnorm/beta/read
I short-cut g_/e6_batchnorm/gammaut0 - g_/e6_batchnorm/batchnorm/mul:in1 skip g_/e6_batchnorm/gamma/read
I short-cut g_/e1_conv2d/w:out0 - g_/e1_conv2d/Conv2D:in1 skip g_/e1_conv2d/w/read
I short-cut g_/e3_conv2d/w:out0 - g_/e3_conv2d/Conv2D:in1 skip g_/e3_conv2d/w/read
I short-cut g_/e2_batchnorm/beta:out0 - g_/e2_batchnorm/batchnorm/sub:in0 skip g_/e2_batchnorm/beta/read
I short-cut g_/e0_conv2d/w:out0 - g_/e0_conv2d/Conv2D:in1 skip g_/e0_conv2d/w/read
I short-cut g_/e7_batchnorm/beta:out0 - g_/e7_batchnorm/batchnorm/sub:in0 skip g_/e7_batchnorm/beta/read
I short-cut g_/e2_conv2d/biases:out0 - g_/e2_conv2d/BiasAdd:in1 skip g_/e2_conv2d/biases/read
I short-cut g_/e7_conv2d/biases:out0 - g_/e7_conv2d/BiasAdd:in1 skip g_/e7_conv2d/biases/read
I short-cut g_/e6_batchnorm/beta:out0 - g_/e6_batchnorm/batchnorm/sub:in0 skip g_/e6_batchnorm/beta/read
I short-cut g_/e3_conv2d/biases:out0 - g_/e3_conv2d/BiasAdd:in1 skip g_/e3_conv2d/biases/read
I short-cut g_/e5_conv2d/w:out0 - g_/e5_conv2d/Conv2D:in1 skip g_/e5_conv2d/w/read
I short-cut g_/e4_batchnorm/gamma:out0 - g_/e4_batchnorm/batchnorm/mul:in1 skip g_/e4_batchnorm/gamma/read
I short-cut g_/e1_conv2d/biases:out0 - g_/e1_conv2d/BiasAdd:in1 skip g_/e1_conv2d/biases/read
I short-cut g_/e2_conv2d/w:out0 - g_/e2_conv2d/Conv2D:in1 skip g_/e2_conv2d/w/read
I short-cut g_/e5_batchnorm/gamma:out0 - g_/e5_batchnorm/batchnorm/mul:in1 skip g_/e5_batchnorm/gamma/read
I short-cut g_/e6_conv2d/w:out0 - g_/e6_conv2d/Conv2D:in1 skip g_/e6_conv2d/w/read
I short-cut g_/e5_conv2d/biases:out0 - g_/e5_conv2d/BiasAdd:in1 skip g_/e5_conv2d/biases/read
I short-cut g_/e4_batchnorm/beta:out0 - g_/e4_batchnorm/batchnorm/sub:in0 skip g_/e4_batchnorm/beta/read
I short-cut g_/e4_conv2d/w:out0 - g_/e4_conv2d/Conv2D:in1 skip g_/e4_conv2d/w/read
I short-cut g_/e4_conv2d/biases:out0 - g_/e4_conv2d/BiasAdd:in1 skip g_/e4_conv2d/biases/read
I short-cut g_/e5_batchnorm/beta:out0 - g_/e5_batchnorm/batchnorm/sub:in0 skip g_/e5_batchnorm/beta/read
I short-cut g_/d0_deconv2d/w:out0 - g_/d0_deconv2d/conv2d_transpose:in1 skip g_/d0_deconv2d/w/read
I short-cut g_/e6_conv2d/biases:out0 - g_/e6_conv2d/BiasAdd:in1 skip g_/e6_conv2d/biases/read
I Have 1 tensors convert to const tensor
D Const tensors:
D ['g_/d0_deconv2d/conv2d_transpose/output_shape:out0']
I build output layer attach_g_/d0_deconv2d/BiasAdd:out0
I build input layer input:out0
D Try match BiasAdd g_/d0_deconv2d/BiasAdd
I Match dconvolution_biasadd [['g_/d0_deconv2d/BiasAdd', 'g_/d0_deconv2d/conv2d_transpose', 'g_/d0_deconv2d/biases', 'g_/d0_deconv2d/conv2d_transpose/output_shape_out_0_const', 'g_/d0_deconv2d/w']] [['BiasAdd', 'DConv', 'C', 'C_1', 'C_2']] to [['deconvolution']]
D Try match Relu g_/e7_relu
I Match relu [['g_/e7_relu']] [['Relu']] to [['relu']]
D Try match Add g_/e7_batchnorm/batchnorm/add_1
I Match instance_norm_2 [['g_/e7_batchnorm/batchnorm/add_1', 'g_/e7_batchnorm/batchnorm/mul_1', 'g_/e7_batchnorm/batchnorm/sub', 'g_/e7_batchnorm/batchnorm/mul', 'g_/e7_batchnorm/beta', 'g_/e7_batchnorm/batchnorm/mul_2', 'g_/e7_batchnorm/batchnorm/Rsqrt', 'g_/e7_batchnorm/gamma', 'g_/e7_batchnorm/moments/Squeeze', 'g_/e7_batchnorm/batchnorm/add', 'g_/e7_batchnorm/moments/mean', 'g_/e7_batchnorm/moments/Squeeze_1', 'g_/e7_batchnorm/batchnorm/add/y', 'g_/e7_batchnorm/moments/mean/reduction_indices', 'g_/e7_batchnorm/moments/variance', 'g_/e7_batchnorm/moments/SquaredDifference', 'g_/e7_batchnorm/moments/variance/reduction_indices', 'g_/e7_batchnorm/moments/StopGradient']] [['Add', 'Mul', 'Sub', 'Mul_1', 'C', 'Mul_2', 'Rsqrt', 'C_1', 'Squeeze', 'Add_1', 'Mean', 'Squeeze_1', 'C_2', 'C_3', 'Mean_1', 'SquaredDifference', 'C_4', 'StopGradient']] to [['instancenormalize']]
D Try match BiasAdd g_/e7_conv2d/BiasAdd
I Match convolution_biasadd [['g_/e7_conv2d/BiasAdd', 'g_/e7_conv2d/Conv2D', 'g_/e7_conv2d/biases', 'g_/e7_conv2d/w']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match Maximum g_/e6_lrelu
I Match leakyrelu_2 [['g_/e6_lrelu', 'g_/mul_6', 'g_/mul_6/x']] [['Maximum', 'Mul', 'C']] to [['leakyrelu']]
D Try match Add g_/e6_batchnorm/batchnorm/add_1
I Match instance_norm_2 [['g_/e6_batchnorm/batchnorm/add_1', 'g_/e6_batchnorm/batchnorm/mul_1', 'g_/e6_batchnorm/batchnorm/sub', 'g_/e6_batchnorm/batchnorm/mul', 'g_/e6_batchnorm/beta', 'g_/e6_batchnorm/batchnorm/mul_2', 'g_/e6_batchnorm/batchnorm/Rsqrt', 'g_/e6_batchnorm/gamma', 'g_/e6_batchnorm/moments/Squeeze', 'g_/e6_batchnorm/batchnorm/add', 'g_/e6_batchnorm/moments/mean', 'g_/e6_batchnorm/moments/Squeeze_1', 'g_/e6_batchnorm/batchnorm/add/y', 'g_/e6_batchnorm/moments/mean/reduction_indices', 'g_/e6_batchnorm/moments/variance', 'g_/e6_batchnorm/moments/SquaredDifference', 'g_/e6_batchnorm/moments/variance/reduction_indices', 'g_/e6_batchnorm/moments/StopGradient']] [['Add', 'Mul', 'Sub', 'Mul_1', 'C', 'Mul_2', 'Rsqrt', 'C_1', 'Squeeze', 'Add_1', 'Mean', 'Squeeze_1', 'C_2', 'C_3', 'Mean_1', 'SquaredDifference', 'C_4', 'StopGradient']] to [['instancenormalize']]
D Try match BiasAdd g_/e6_conv2d/BiasAdd
I Match convolution_biasadd [['g_/e6_conv2d/BiasAdd', 'g_/e6_conv2d/Conv2D', 'g_/e6_conv2d/biases', 'g_/e6_conv2d/w']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match Maximum g_/e5_lrelu
I Match leakyrelu_2 [['g_/e5_lrelu', 'g_/mul_5', 'g_/mul_5/x']] [['Maximum', 'Mul', 'C']] to [['leakyrelu']]
D Try match Add g_/e5_batchnorm/batchnorm/add_1
I Match instance_norm_2 [['g_/e5_batchnorm/batchnorm/add_1', 'g_/e5_batchnorm/batchnorm/mul_1', 'g_/e5_batchnorm/batchnorm/sub', 'g_/e5_batchnorm/batchnorm/mul', 'g_/e5_batchnorm/beta', 'g_/e5_batchnorm/batchnorm/mul_2', 'g_/e5_batchnorm/batchnorm/Rsqrt', 'g_/e5_batchnorm/gamma', 'g_/e5_batchnorm/moments/Squeeze', 'g_/e5_batchnorm/batchnorm/add', 'g_/e5_batchnorm/moments/mean', 'g_/e5_batchnorm/moments/Squeeze_1', 'g_/e5_batchnorm/batchnorm/add/y', 'g_/e5_batchnorm/moments/mean/reduction_indices', 'g_/e5_batchnorm/moments/variance', 'g_/e5_batchnorm/moments/SquaredDifference', 'g_/e5_batchnorm/moments/variance/reduction_indices', 'g_/e5_batchnorm/moments/StopGradient']] [['Add', 'Mul', 'Sub', 'Mul_1', 'C', 'Mul_2', 'Rsqrt', 'C_1', 'Squeeze', 'Add_1', 'Mean', 'Squeeze_1', 'C_2', 'C_3', 'Mean_1', 'SquaredDifference', 'C_4', 'StopGradient']] to [['instancenormalize']]
D Try match BiasAdd g_/e5_conv2d/BiasAdd
I Match convolution_biasadd [['g_/e5_conv2d/BiasAdd', 'g_/e5_conv2d/Conv2D', 'g_/e5_conv2d/biases', 'g_/e5_conv2d/w']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match Maximum g_/e4_lrelu
I Match leakyrelu_2 [['g_/e4_lrelu', 'g_/mul_4', 'g_/mul_4/x']] [['Maximum', 'Mul', 'C']] to [['leakyrelu']]
D Try match Add g_/e4_batchnorm/batchnorm/add_1
I Match instance_norm_2 [['g_/e4_batchnorm/batchnorm/add_1', 'g_/e4_batchnorm/batchnorm/mul_1', 'g_/e4_batchnorm/batchnorm/sub', 'g_/e4_batchnorm/batchnorm/mul', 'g_/e4_batchnorm/beta', 'g_/e4_batchnorm/batchnorm/mul_2', 'g_/e4_batchnorm/batchnorm/Rsqrt', 'g_/e4_batchnorm/gamma', 'g_/e4_batchnorm/moments/Squeeze', 'g_/e4_batchnorm/batchnorm/add', 'g_/e4_batchnorm/moments/mean', 'g_/e4_batchnorm/moments/Squeeze_1', 'g_/e4_batchnorm/batchnorm/add/y', 'g_/e4_batchnorm/moments/mean/reduction_indices', 'g_/e4_batchnorm/moments/variance', 'g_/e4_batchnorm/moments/SquaredDifference', 'g_/e4_batchnorm/moments/variance/reduction_indices', 'g_/e4_batchnorm/moments/StopGradient']] [['Add', 'Mul', 'Sub', 'Mul_1', 'C', 'Mul_2', 'Rsqrt', 'C_1', 'Squeeze', 'Add_1', 'Mean', 'Squeeze_1', 'C_2', 'C_3', 'Mean_1', 'SquaredDifference', 'C_4', 'StopGradient']] to [['instancenormalize']]
D Try match BiasAdd g_/e4_conv2d/BiasAdd
I Match convolution_biasadd [['g_/e4_conv2d/BiasAdd', 'g_/e4_conv2d/Conv2D', 'g_/e4_conv2d/biases', 'g_/e4_conv2d/w']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match Maximum g_/e3_lrelu
I Match leakyrelu_2 [['g_/e3_lrelu', 'g_/mul_3', 'g_/mul_3/x']] [['Maximum', 'Mul', 'C']] to [['leakyrelu']]
D Try match Add g_/e3_batchnorm/batchnorm/add_1
I Match instance_norm_2 [['g_/e3_batchnorm/batchnorm/add_1', 'g_/e3_batchnorm/batchnorm/mul_1', 'g_/e3_batchnorm/batchnorm/sub', 'g_/e3_batchnorm/batchnorm/mul', 'g_/e3_batchnorm/beta', 'g_/e3_batchnorm/batchnorm/mul_2', 'g_/e3_batchnorm/batchnorm/Rsqrt', 'g_/e3_batchnorm/gamma', 'g_/e3_batchnorm/moments/Squeeze', 'g_/e3_batchnorm/batchnorm/add', 'g_/e3_batchnorm/moments/mean', 'g_/e3_batchnorm/moments/Squeeze_1', 'g_/e3_batchnorm/batchnorm/add/y', 'g_/e3_batchnorm/moments/mean/reduction_indices', 'g_/e3_batchnorm/moments/variance', 'g_/e3_batchnorm/moments/SquaredDifference', 'g_/e3_batchnorm/moments/variance/reduction_indices', 'g_/e3_batchnorm/moments/StopGradient']] [['Add', 'Mul', 'Sub', 'Mul_1', 'C', 'Mul_2', 'Rsqrt', 'C_1', 'Squeeze', 'Add_1', 'Mean', 'Squeeze_1', 'C_2', 'C_3', 'Mean_1', 'SquaredDifference', 'C_4', 'StopGradient']] to [['instancenormalize']]
D Try match BiasAdd g_/e3_conv2d/BiasAdd
I Match convolution_biasadd [['g_/e3_conv2d/BiasAdd', 'g_/e3_conv2d/Conv2D', 'g_/e3_conv2d/biases', 'g_/e3_conv2d/w']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match Maximum g_/e2_lrelu
I Match leakyrelu_2 [['g_/e2_lrelu', 'g_/mul_2', 'g_/mul_2/x']] [['Maximum', 'Mul', 'C']] to [['leakyrelu']]
D Try match Add g_/e2_batchnorm/batchnorm/add_1
I Match instance_norm_2 [['g_/e2_batchnorm/batchnorm/add_1', 'g_/e2_batchnorm/batchnorm/mul_1', 'g_/e2_batchnorm/batchnorm/sub', 'g_/e2_batchnorm/batchnorm/mul', 'g_/e2_batchnorm/beta', 'g_/e2_batchnorm/batchnorm/mul_2', 'g_/e2_batchnorm/batchnorm/Rsqrt', 'g_/e2_batchnorm/gamma', 'g_/e2_batchnorm/moments/Squeeze', 'g_/e2_batchnorm/batchnorm/add', 'g_/e2_batchnorm/moments/mean', 'g_/e2_batchnorm/moments/Squeeze_1', 'g_/e2_batchnorm/batchnorm/add/y', 'g_/e2_batchnorm/moments/mean/reduction_indices', 'g_/e2_batchnorm/moments/variance', 'g_/e2_batchnorm/moments/SquaredDifference', 'g_/e2_batchnorm/moments/variance/reduction_indices', 'g_/e2_batchnorm/moments/StopGradient']] [['Add', 'Mul', 'Sub', 'Mul_1', 'C', 'Mul_2', 'Rsqrt', 'C_1', 'Squeeze', 'Add_1', 'Mean', 'Squeeze_1', 'C_2', 'C_3', 'Mean_1', 'SquaredDifference', 'C_4', 'StopGradient']] to [['instancenormalize']]
D Try match BiasAdd g_/e2_conv2d/BiasAdd
I Match convolution_biasadd [['g_/e2_conv2d/BiasAdd', 'g_/e2_conv2d/Conv2D', 'g_/e2_conv2d/biases', 'g_/e2_conv2d/w']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match Maximum g_/e1_lrelu
I Match leakyrelu_2 [['g_/e1_lrelu', 'g_/mul_1', 'g_/mul_1/x']] [['Maximum', 'Mul', 'C']] to [['leakyrelu']]
D Try match Add g_/e1_batchnorm/batchnorm/add_1
I Match instance_norm_2 [['g_/e1_batchnorm/batchnorm/add_1', 'g_/e1_batchnorm/batchnorm/mul_1', 'g_/e1_batchnorm/batchnorm/sub', 'g_/e1_batchnorm/batchnorm/mul', 'g_/e1_batchnorm/beta', 'g_/e1_batchnorm/batchnorm/mul_2', 'g_/e1_batchnorm/batchnorm/Rsqrt', 'g_/e1_batchnorm/gamma', 'g_/e1_batchnorm/moments/Squeeze', 'g_/e1_batchnorm/batchnorm/add', 'g_/e1_batchnorm/moments/mean', 'g_/e1_batchnorm/moments/Squeeze_1', 'g_/e1_batchnorm/batchnorm/add/y', 'g_/e1_batchnorm/moments/mean/reduction_indices', 'g_/e1_batchnorm/moments/variance', 'g_/e1_batchnorm/moments/SquaredDifference', 'g_/e1_batchnorm/moments/variance/reduction_indices', 'g_/e1_batchnorm/moments/StopGradient']] [['Add', 'Mul', 'Sub', 'Mul_1', 'C', 'Mul_2', 'Rsqrt', 'C_1', 'Squeeze', 'Add_1', 'Mean', 'Squeeze_1', 'C_2', 'C_3', 'Mean_1', 'SquaredDifference', 'C_4', 'StopGradient']] to [['instancenormalize']]
D Try match BiasAdd g_/e1_conv2d/BiasAdd
I Match convolution_biasadd [['g_/e1_conv2d/BiasAdd', 'g_/e1_conv2d/Conv2D', 'g_/e1_conv2d/biases', 'g_/e1_conv2d/w']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match Maximum g_/e0_lrelu
I Match leakyrelu_2 [['g_/e0_lrelu', 'g_/mul', 'g_/mul/x']] [['Maximum', 'Mul', 'C']] to [['leakyrelu']]
D Try match BiasAdd g_/e0_conv2d/BiasAdd
I Match convolution_biasadd [['g_/e0_conv2d/BiasAdd', 'g_/e0_conv2d/Conv2D', 'g_/e0_conv2d/biases', 'g_/e0_conv2d/w']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D connect g_/e7_relu_3 0  ~ g_/d0_deconv2d/BiasAdd_2 0
D connect g_/e7_batchnorm/batchnorm/add_1_4 0  ~ g_/e7_relu_3 0
D connect g_/e7_conv2d/BiasAdd_5 0  ~ g_/e7_batchnorm/batchnorm/add_1_4 0
D connect g_/e6_lrelu_6 0  ~ g_/e7_conv2d/BiasAdd_5 0
D connect g_/e6_batchnorm/batchnorm/add_1_7 0  ~ g_/e6_lrelu_6 0
D connect g_/e6_conv2d/BiasAdd_8 0  ~ g_/e6_batchnorm/batchnorm/add_1_7 0
D connect g_/e5_lrelu_9 0  ~ g_/e6_conv2d/BiasAdd_8 0
D connect g_/e5_batchnorm/batchnorm/add_1_10 0  ~ g_/e5_lrelu_9 0
D connect g_/e5_conv2d/BiasAdd_11 0  ~ g_/e5_batchnorm/batchnorm/add_1_10 0
D connect g_/e4_lrelu_12 0  ~ g_/e5_conv2d/BiasAdd_11 0
D connect g_/e4_batchnorm/batchnorm/add_1_13 0  ~ g_/e4_lrelu_12 0
D connect g_/e4_conv2d/BiasAdd_14 0  ~ g_/e4_batchnorm/batchnorm/add_1_13 0
D connect g_/e3_lrelu_15 0  ~ g_/e4_conv2d/BiasAdd_14 0
D connect g_/e3_batchnorm/batchnorm/add_1_16 0  ~ g_/e3_lrelu_15 0
D connect g_/e3_conv2d/BiasAdd_17 0  ~ g_/e3_batchnorm/batchnorm/add_1_16 0
D connect g_/e2_lrelu_18 0  ~ g_/e3_conv2d/BiasAdd_17 0
D connect g_/e2_batchnorm/batchnorm/add_1_19 0  ~ g_/e2_lrelu_18 0
D connect g_/e2_conv2d/BiasAdd_20 0  ~ g_/e2_batchnorm/batchnorm/add_1_19 0
D connect g_/e1_lrelu_21 0  ~ g_/e2_conv2d/BiasAdd_20 0
D connect g_/e1_batchnorm/batchnorm/add_1_22 0  ~ g_/e1_lrelu_21 0
D connect g_/e1_conv2d/BiasAdd_23 0  ~ g_/e1_batchnorm/batchnorm/add_1_22 0
D connect g_/e0_lrelu_24 0  ~ g_/e1_conv2d/BiasAdd_23 0
D connect g_/e0_conv2d/BiasAdd_25 0  ~ g_/e0_lrelu_24 0
D connect attach_input/out0_1 0  ~ g_/e0_conv2d/BiasAdd_25 0
D connect g_/d0_deconv2d/BiasAdd_2 0  ~ attach_g_/d0_deconv2d/BiasAdd/out0_0 0
D Process attach_input/out0_1 ...
D RKNN output shape(input): (0 256 256 3)
D Process g_/e0_conv2d/BiasAdd_25 ...
D RKNN output shape(convolution): (0 128 128 64)
D Process g_/e0_lrelu_24 ...
D RKNN output shape(leakyrelu): (0 128 128 64)
D Process g_/e1_conv2d/BiasAdd_23 ...
D RKNN output shape(convolution): (0 64 64 128)
D Process g_/e1_batchnorm/batchnorm/add_1_22 ...
D RKNN output shape(instancenormalize): (0 64 64 128)
D Process g_/e1_lrelu_21 ...
D RKNN output shape(leakyrelu): (0 64 64 128)
D Process g_/e2_conv2d/BiasAdd_20 ...
D RKNN output shape(convolution): (0 32 32 256)
D Process g_/e2_batchnorm/batchnorm/add_1_19 ...
D RKNN output shape(instancenormalize): (0 32 32 256)
D Process g_/e2_lrelu_18 ...
D RKNN output shape(leakyrelu): (0 32 32 256)
D Process g_/e3_conv2d/BiasAdd_17 ...
D RKNN output shape(convolution): (0 16 16 512)
D Process g_/e3_batchnorm/batchnorm/add_1_16 ...
D RKNN output shape(instancenormalize): (0 16 16 512)
D Process g_/e3_lrelu_15 ...
D RKNN output shape(leakyrelu): (0 16 16 512)
D Process g_/e4_conv2d/BiasAdd_14 ...
D RKNN output shape(convolution): (0 8 8 512)
D Process g_/e4_batchnorm/batchnorm/add_1_13 ...
D RKNN output shape(instancenormalize): (0 8 8 512)
D Process g_/e4_lrelu_12 ...
D RKNN output shape(leakyrelu): (0 8 8 512)
D Process g_/e5_conv2d/BiasAdd_11 ...
D RKNN output shape(convolution): (0 4 4 512)
D Process g_/e5_batchnorm/batchnorm/add_1_10 ...
D RKNN output shape(instancenormalize): (0 4 4 512)
D Process g_/e5_lrelu_9 ...
D RKNN output shape(leakyrelu): (0 4 4 512)
D Process g_/e6_conv2d/BiasAdd_8 ...
D RKNN output shape(convolution): (0 2 2 512)
D Process g_/e6_batchnorm/batchnorm/add_1_7 ...
D RKNN output shape(instancenormalize): (0 2 2 512)
D Process g_/e6_lrelu_6 ...
D RKNN output shape(leakyrelu): (0 2 2 512)
D Process g_/e7_conv2d/BiasAdd_5 ...
D RKNN output shape(convolution): (0 1 1 512)
D Process g_/e7_batchnorm/batchnorm/add_1_4 ...
D RKNN output shape(instancenormalize): (0 1 1 512)
D Process g_/e7_relu_3 ...
D RKNN output shape(relu): (0 1 1 512)
D Process g_/d0_deconv2d/BiasAdd_2 ...
D RKNN output shape(deconvolution): (0 2 2 512)
D Process attach_g_/d0_deconv2d/BiasAdd/out0_0 ...
D RKNN output shape(output): (0 2 2 512)
I Build lane5 complete.
D Optimizing network with force_1d_tensor, swapper, merge_layer, auto_fill_bn, resize_nearest_transformer, auto_fill_multiply, merge_avgpool_conv1x1, auto_fill_zero_bias, proposal_opt_import
D Optimizing network with conv2d_big_kernel_size_transform
D Optimizing network with auto_fill_tf_quantize, align_quantize, broadcast_quantize, qnt_adjust_coef
I Generate input meta
I Load input meta
I Generate input meta
D import clients finished
D import clients finished
I Load net...
D Load layer attach_g_/d0_deconv2d/BiasAdd/out0_0 ...
D Load layer attach_input/out0_1 ...
D Load layer g_/d0_deconv2d/BiasAdd_2 ...
D Load layer g_/e7_relu_3 ...
D Load layer g_/e7_batchnorm/batchnorm/add_1_4 ...
D Load layer g_/e7_conv2d/BiasAdd_5 ...
D Load layer g_/e6_lrelu_6 ...
D Load layer g_/e6_batchnorm/batchnorm/add_1_7 ...
D Load layer g_/e6_conv2d/BiasAdd_8 ...
D Load layer g_/e5_lrelu_9 ...
D Load layer g_/e5_batchnorm/batchnorm/add_1_10 ...
D Load layer g_/e5_conv2d/BiasAdd_11 ...
D Load layer g_/e4_lrelu_12 ...
D Load layer g_/e4_batchnorm/batchnorm/add_1_13 ...
D Load layer g_/e4_conv2d/BiasAdd_14 ...
D Load layer g_/e3_lrelu_15 ...
D Load layer g_/e3_batchnorm/batchnorm/add_1_16 ...
D Load layer g_/e3_conv2d/BiasAdd_17 ...
D Load layer g_/e2_lrelu_18 ...
D Load layer g_/e2_batchnorm/batchnorm/add_1_19 ...
D Load layer g_/e2_conv2d/BiasAdd_20 ...
D Load layer g_/e1_lrelu_21 ...
D Load layer g_/e1_batchnorm/batchnorm/add_1_22 ...
D Load layer g_/e1_conv2d/BiasAdd_23 ...
D Load layer g_/e0_lrelu_24 ...
D Load layer g_/e0_conv2d/BiasAdd_25 ...
I Load net complete...
I Load data...
I Load input meta
D iterations: 1, batch_size: 35
I Quantization start...
D set up a quantize net
D Prebuild network ...
D Process attach_input/out0_1 ...
D RKNN output shape(input): (35 256 256 3)
D Process g_/e0_conv2d/BiasAdd_25 ...
D RKNN output shape(convolution): (35 128 128 64)
D Process g_/e0_lrelu_24 ...
D RKNN output shape(leakyrelu): (35 128 128 64)
D Process g_/e1_conv2d/BiasAdd_23 ...
D RKNN output shape(convolution): (35 64 64 128)
D Process g_/e1_batchnorm/batchnorm/add_1_22 ...
D RKNN output shape(instancenormalize): (35 64 64 128)
D Process g_/e1_lrelu_21 ...
D RKNN output shape(leakyrelu): (35 64 64 128)
D Process g_/e2_conv2d/BiasAdd_20 ...
D RKNN output shape(convolution): (35 32 32 256)
D Process g_/e2_batchnorm/batchnorm/add_1_19 ...
D RKNN output shape(instancenormalize): (35 32 32 256)
D Process g_/e2_lrelu_18 ...
D RKNN output shape(leakyrelu): (35 32 32 256)
D Process g_/e3_conv2d/BiasAdd_17 ...
D RKNN output shape(convolution): (35 16 16 512)
D Process g_/e3_batchnorm/batchnorm/add_1_16 ...
D RKNN output shape(instancenormalize): (35 16 16 512)
D Process g_/e3_lrelu_15 ...
D RKNN output shape(leakyrelu): (35 16 16 512)
D Process g_/e4_conv2d/BiasAdd_14 ...
D RKNN output shape(convolution): (35 8 8 512)
D Process g_/e4_batchnorm/batchnorm/add_1_13 ...
D RKNN output shape(instancenormalize): (35 8 8 512)
D Process g_/e4_lrelu_12 ...
D RKNN output shape(leakyrelu): (35 8 8 512)
D Process g_/e5_conv2d/BiasAdd_11 ...
D RKNN output shape(convolution): (35 4 4 512)
D Process g_/e5_batchnorm/batchnorm/add_1_10 ...
D RKNN output shape(instancenormalize): (35 4 4 512)
D Process g_/e5_lrelu_9 ...
D RKNN output shape(leakyrelu): (35 4 4 512)
D Process g_/e6_conv2d/BiasAdd_8 ...
D RKNN output shape(convolution): (35 2 2 512)
D Process g_/e6_batchnorm/batchnorm/add_1_7 ...
D RKNN output shape(instancenormalize): (35 2 2 512)
D Process g_/e6_lrelu_6 ...
D RKNN output shape(leakyrelu): (35 2 2 512)
D Process g_/e7_conv2d/BiasAdd_5 ...
D RKNN output shape(convolution): (35 1 1 512)
D Process g_/e7_batchnorm/batchnorm/add_1_4 ...
D RKNN output shape(instancenormalize): (35 1 1 512)
D Process g_/e7_relu_3 ...
D RKNN output shape(relu): (35 1 1 512)
D Process g_/d0_deconv2d/BiasAdd_2 ...
D RKNN output shape(deconvolution): (35 2 2 512)
D Process attach_g_/d0_deconv2d/BiasAdd/out0_0 ...
D RKNN output shape(output): (35 2 2 512)
I Build lane5 complete.
D *********** Setup input meta ***********
D import clients finished
D *********** Setup database (1) ***********
D Setup provider layer "text_input_layer":
D Lids: ['attach_input/out0_1']
D Shapes: [[35, 256, 256, 3]]
D Data types: ['float32']
D Sparse tensors: []
D Tensor names(H5FS only): []
D Add preprocess "[('reverse_channel', False), ('mean', [127.5, 127.5, 127.5]), ('scale', 0.00784313725490196)]" for "attach_input/out0_1"
D *********** Setup input meta complete ***********
D Process attach_input/out0_1 ...
D RKNN output shape(input): (35 256 256 3)
D Real output shape: (35, 256, 256, 3)
D Process g_/e0_conv2d/BiasAdd_25 ...
D RKNN output shape(convolution): (35 128 128 64)
D Real output shape: (35, 128, 128, 64)
D Process g_/e0_lrelu_24 ...
D RKNN output shape(leakyrelu): (35 128 128 64)
D Real output shape: (35, 128, 128, 64)
D Process g_/e1_conv2d/BiasAdd_23 ...
D RKNN output shape(convolution): (35 64 64 128)
D Real output shape: (35, 64, 64, 128)
D Process g_/e1_batchnorm/batchnorm/add_1_22 ...
D RKNN output shape(instancenormalize): (35 64 64 128)
D Real output shape: (35, 64, 64, 128)
D Process g_/e1_lrelu_21 ...
D RKNN output shape(leakyrelu): (35 64 64 128)
D Real output shape: (35, 64, 64, 128)
D Process g_/e2_conv2d/BiasAdd_20 ...
D RKNN output shape(convolution): (35 32 32 256)
D Real output shape: (35, 32, 32, 256)
D Process g_/e2_batchnorm/batchnorm/add_1_19 ...
D RKNN output shape(instancenormalize): (35 32 32 256)
D Real output shape: (35, 32, 32, 256)
D Process g_/e2_lrelu_18 ...
D RKNN output shape(leakyrelu): (35 32 32 256)
D Real output shape: (35, 32, 32, 256)
D Process g_/e3_conv2d/BiasAdd_17 ...
D RKNN output shape(convolution): (35 16 16 512)
D Real output shape: (35, 16, 16, 512)
D Process g_/e3_batchnorm/batchnorm/add_1_16 ...
D RKNN output shape(instancenormalize): (35 16 16 512)
D Real output shape: (35, 16, 16, 512)
D Process g_/e3_lrelu_15 ...
D RKNN output shape(leakyrelu): (35 16 16 512)
D Real output shape: (35, 16, 16, 512)
D Process g_/e4_conv2d/BiasAdd_14 ...
D RKNN output shape(convolution): (35 8 8 512)
D Real output shape: (35, 8, 8, 512)
D Process g_/e4_batchnorm/batchnorm/add_1_13 ...
D RKNN output shape(instancenormalize): (35 8 8 512)
D Real output shape: (35, 8, 8, 512)
D Process g_/e4_lrelu_12 ...
D RKNN output shape(leakyrelu): (35 8 8 512)
D Real output shape: (35, 8, 8, 512)
D Process g_/e5_conv2d/BiasAdd_11 ...
D RKNN output shape(convolution): (35 4 4 512)
D Real output shape: (35, 4, 4, 512)
D Process g_/e5_batchnorm/batchnorm/add_1_10 ...
D RKNN output shape(instancenormalize): (35 4 4 512)
D Real output shape: (35, 4, 4, 512)
D Process g_/e5_lrelu_9 ...
D RKNN output shape(leakyrelu): (35 4 4 512)
D Real output shape: (35, 4, 4, 512)
D Process g_/e6_conv2d/BiasAdd_8 ...
D RKNN output shape(convolution): (35 2 2 512)
D Real output shape: (35, 2, 2, 512)
D Process g_/e6_batchnorm/batchnorm/add_1_7 ...
D RKNN output shape(instancenormalize): (35 2 2 512)
D Real output shape: (35, 2, 2, 512)
D Process g_/e6_lrelu_6 ...
D RKNN output shape(leakyrelu): (35 2 2 512)
D Real output shape: (35, 2, 2, 512)
D Process g_/e7_conv2d/BiasAdd_5 ...
D RKNN output shape(convolution): (35 1 1 512)
D Real output shape: (35, 1, 1, 512)
D Process g_/e7_batchnorm/batchnorm/add_1_4 ...
D RKNN output shape(instancenormalize): (35 1 1 512)
D Real output shape: (35, 1, 1, 512)
D Process g_/e7_relu_3 ...
D RKNN output shape(relu): (35 1 1 512)
D Real output shape: (35, 1, 1, 512)
D Process g_/d0_deconv2d/BiasAdd_2 ...
D RKNN output shape(deconvolution): (35 2 2 512)
D Real output shape: (35, 2, 2, 512)
D Process attach_g_/d0_deconv2d/BiasAdd/out0_0 ...
D RKNN output shape(output): (35 2 2 512)
D Real output shape: (35, 2, 2, 512)
I Build lane5 complete.
I Running 1 iterations
D 0(100.00%), Queue size 0
D Quantize tensor @attach_input/out0_1:out0.
D Quantize tensor @g_/d0_deconv2d/BiasAdd_2:out0.
D Quantize tensor @g_/e7_relu_3:out0.
D Quantize tensor @g_/e7_batchnorm/batchnorm/add_1_4:out0.
D Quantize tensor @g_/e7_conv2d/BiasAdd_5:out0.
D Quantize tensor @g_/e6_lrelu_6:out0.
D Quantize tensor @g_/e6_batchnorm/batchnorm/add_1_7:out0.
D Quantize tensor @g_/e6_conv2d/BiasAdd_8:out0.
D Quantize tensor @g_/e5_lrelu_9:out0.
D Quantize tensor @g_/e5_batchnorm/batchnorm/add_1_10:out0.
D Quantize tensor @g_/e5_conv2d/BiasAdd_11:out0.
D Quantize tensor @g_/e4_lrelu_12:out0.
D Quantize tensor @g_/e4_batchnorm/batchnorm/add_1_13:out0.
D Quantize tensor @g_/e4_conv2d/BiasAdd_14:out0.
D Quantize tensor @g_/e3_lrelu_15:out0.
D Quantize tensor @g_/e3_batchnorm/batchnorm/add_1_16:out0.
D Quantize tensor @g_/e3_conv2d/BiasAdd_17:out0.
D Quantize tensor @g_/e2_lrelu_18:out0.
D Quantize tensor @g_/e2_batchnorm/batchnorm/add_1_19:out0.
D Quantize tensor @g_/e2_conv2d/BiasAdd_20:out0.
D Quantize tensor @g_/e1_lrelu_21:out0.
D Quantize tensor @g_/e1_batchnorm/batchnorm/add_1_22:out0.
D Quantize tensor @g_/e1_conv2d/BiasAdd_23:out0.
D Quantize tensor @g_/e0_lrelu_24:out0.
D Quantize tensor @g_/e0_conv2d/BiasAdd_25:out0.
D Quantize tensor @g_/d0_deconv2d/BiasAdd_2:weight.
D Quantize tensor @g_/e7_conv2d/BiasAdd_5:weight.
D Quantize tensor @g_/e6_conv2d/BiasAdd_8:weight.
D Quantize tensor @g_/e5_conv2d/BiasAdd_11:weight.
D Quantize tensor @g_/e4_conv2d/BiasAdd_14:weight.
D Quantize tensor @g_/e3_conv2d/BiasAdd_17:weight.
D Quantize tensor @g_/e2_conv2d/BiasAdd_20:weight.
D Quantize tensor @g_/e1_conv2d/BiasAdd_23:weight.
D Quantize tensor @g_/e0_conv2d/BiasAdd_25:weight.
D Quantize tensor @g_/d0_deconv2d/BiasAdd_2:bias.
D Quantize tensor @g_/e7_conv2d/BiasAdd_5:bias.
D Quantize tensor @g_/e6_conv2d/BiasAdd_8:bias.
D Quantize tensor @g_/e5_conv2d/BiasAdd_11:bias.
D Quantize tensor @g_/e4_conv2d/BiasAdd_14:bias.
D Quantize tensor @g_/e3_conv2d/BiasAdd_17:bias.
D Quantize tensor @g_/e2_conv2d/BiasAdd_20:bias.
D Quantize tensor @g_/e1_conv2d/BiasAdd_23:bias.
D Quantize tensor @g_/e0_conv2d/BiasAdd_25:bias.
I Clean.
D Optimizing network with align_quantize, broadcast_quantize, qnt_adjust_coef
D Quantize tensor(@attach_g_/d0_deconv2d/BiasAdd/out0_0:out0) with tensor(@g_/d0_deconv2d/BiasAdd_2:out0)
I Quantization complete.
I Clean.
D import clients finished
I Load net...
D Load layer attach_g_/d0_deconv2d/BiasAdd/out0_0 ...
D Load layer attach_input/out0_1 ...
D Load layer g_/d0_deconv2d/BiasAdd_2 ...
D Load layer g_/e7_relu_3 ...
D Load layer g_/e7_batchnorm/batchnorm/add_1_4 ...
D Load layer g_/e7_conv2d/BiasAdd_5 ...
D Load layer g_/e6_lrelu_6 ...
D Load layer g_/e6_batchnorm/batchnorm/add_1_7 ...
D Load layer g_/e6_conv2d/BiasAdd_8 ...
D Load layer g_/e5_lrelu_9 ...
D Load layer g_/e5_batchnorm/batchnorm/add_1_10 ...
D Load layer g_/e5_conv2d/BiasAdd_11 ...
D Load layer g_/e4_lrelu_12 ...
D Load layer g_/e4_batchnorm/batchnorm/add_1_13 ...
D Load layer g_/e4_conv2d/BiasAdd_14 ...
D Load layer g_/e3_lrelu_15 ...
D Load layer g_/e3_batchnorm/batchnorm/add_1_16 ...
D Load layer g_/e3_conv2d/BiasAdd_17 ...
D Load layer g_/e2_lrelu_18 ...
D Load layer g_/e2_batchnorm/batchnorm/add_1_19 ...
D Load layer g_/e2_conv2d/BiasAdd_20 ...
D Load layer g_/e1_lrelu_21 ...
D Load layer g_/e1_batchnorm/batchnorm/add_1_22 ...
D Load layer g_/e1_conv2d/BiasAdd_23 ...
D Load layer g_/e0_lrelu_24 ...
D Load layer g_/e0_conv2d/BiasAdd_25 ...
I Load net complete...
I Load data...
I Load quantization tensor table
I Load input meta
D Process attach_input/out0_1 ...
D RKNN output shape(input): (1 256 256 3)
D Process g_/e0_conv2d/BiasAdd_25 ...
D RKNN output shape(convolution): (1 128 128 64)
D Process g_/e0_lrelu_24 ...
D RKNN output shape(leakyrelu): (1 128 128 64)
D Process g_/e1_conv2d/BiasAdd_23 ...
D RKNN output shape(convolution): (1 64 64 128)
D Process g_/e1_batchnorm/batchnorm/add_1_22 ...
D RKNN output shape(instancenormalize): (1 64 64 128)
D Process g_/e1_lrelu_21 ...
D RKNN output shape(leakyrelu): (1 64 64 128)
D Process g_/e2_conv2d/BiasAdd_20 ...
D RKNN output shape(convolution): (1 32 32 256)
D Process g_/e2_batchnorm/batchnorm/add_1_19 ...
D RKNN output shape(instancenormalize): (1 32 32 256)
D Process g_/e2_lrelu_18 ...
D RKNN output shape(leakyrelu): (1 32 32 256)
D Process g_/e3_conv2d/BiasAdd_17 ...
D RKNN output shape(convolution): (1 16 16 512)
D Process g_/e3_batchnorm/batchnorm/add_1_16 ...
D RKNN output shape(instancenormalize): (1 16 16 512)
D Process g_/e3_lrelu_15 ...
D RKNN output shape(leakyrelu): (1 16 16 512)
D Process g_/e4_conv2d/BiasAdd_14 ...
D RKNN output shape(convolution): (1 8 8 512)
D Process g_/e4_batchnorm/batchnorm/add_1_13 ...
D RKNN output shape(instancenormalize): (1 8 8 512)
D Process g_/e4_lrelu_12 ...
D RKNN output shape(leakyrelu): (1 8 8 512)
D Process g_/e5_conv2d/BiasAdd_11 ...
D RKNN output shape(convolution): (1 4 4 512)
D Process g_/e5_batchnorm/batchnorm/add_1_10 ...
D RKNN output shape(instancenormalize): (1 4 4 512)
D Process g_/e5_lrelu_9 ...
D RKNN output shape(leakyrelu): (1 4 4 512)
D Process g_/e6_conv2d/BiasAdd_8 ...
D RKNN output shape(convolution): (1 2 2 512)
D Process g_/e6_batchnorm/batchnorm/add_1_7 ...
D RKNN output shape(instancenormalize): (1 2 2 512)
D Process g_/e6_lrelu_6 ...
D RKNN output shape(leakyrelu): (1 2 2 512)
D Process g_/e7_conv2d/BiasAdd_5 ...
D RKNN output shape(convolution): (1 1 1 512)
D Process g_/e7_batchnorm/batchnorm/add_1_4 ...
D RKNN output shape(instancenormalize): (1 1 1 512)
D Process g_/e7_relu_3 ...
D RKNN output shape(relu): (1 1 1 512)
D Process g_/d0_deconv2d/BiasAdd_2 ...
D RKNN output shape(deconvolution): (1 2 2 512)
D Process attach_g_/d0_deconv2d/BiasAdd/out0_0 ...
D RKNN output shape(output): (1 2 2 512)
I Build lane5 complete.
I Config File "/home/zyj/.conda/envs/rockchip/lib/python3.6/site-packages/rknn/base/RK1808_PID0X82" load/generated successfully
I Initialzing network optimizer by /home/zyj/.conda/envs/rockchip/lib/python3.6/site-packages/rknn/base/RK1808_PID0X82 ...
D Optimizing network with t2c_tf2caffe
I Start T2C Switcher...
D insert permute g_/e0_conv2d/BiasAdd_25_RKNN_mark_perm_26 before g_/e0_conv2d/BiasAdd_25
D insert permute attach_g_/d0_deconv2d/BiasAdd/out0_0_RKNN_mark_perm_27 before attach_g_/d0_deconv2d/BiasAdd/out0_0
I End T2C Switcher...
D Optimizing network with qnt_adjust_coef, multiply_transform, add_extra_io, format_input_ops, auto_fill_zero_bias, conv_kernel_transform, twod_op_transform, conv_1xn_transform, strip_op, extend_unstack_split, extend_batchnormalize, swapper, merge_layer, transform_layer, proposal_opt, broadcast_op, strip_op, auto_fill_reshape_zero, adjust_output_attrs
D Optimizing network with c2drv_convert_axis, c2drv_convert_shape, c2drv_convert_array, c2drv_cast_dtype
I Building data ...
I Packing data ...
D Packing g_/d0_deconv2d/BiasAdd_2 ...
D Quantize @g_/d0_deconv2d/BiasAdd_2:bias to asymmetric_quantized.
D Quantize @g_/d0_deconv2d/BiasAdd_2:weight to asymmetric_quantized.
D Packing g_/e0_conv2d/BiasAdd_25 ...
D Quantize @g_/e0_conv2d/BiasAdd_25:bias to asymmetric_quantized.
D Quantize @g_/e0_conv2d/BiasAdd_25:weight to asymmetric_quantized.
D Packing g_/e1_batchnorm/batchnorm/add_1_22 ...
D Packing g_/e1_conv2d/BiasAdd_23 ...
D Quantize @g_/e1_conv2d/BiasAdd_23:bias to asymmetric_quantized.
D Quantize @g_/e1_conv2d/BiasAdd_23:weight to asymmetric_quantized.
D Packing g_/e2_batchnorm/batchnorm/add_1_19 ...
D Packing g_/e2_conv2d/BiasAdd_20 ...
D Quantize @g_/e2_conv2d/BiasAdd_20:bias to asymmetric_quantized.
D Quantize @g_/e2_conv2d/BiasAdd_20:weight to asymmetric_quantized.
D Packing g_/e3_batchnorm/batchnorm/add_1_16 ...
D Packing g_/e3_conv2d/BiasAdd_17 ...
D Quantize @g_/e3_conv2d/BiasAdd_17:bias to asymmetric_quantized.
D Quantize @g_/e3_conv2d/BiasAdd_17:weight to asymmetric_quantized.
D Packing g_/e4_batchnorm/batchnorm/add_1_13 ...
D Packing g_/e4_conv2d/BiasAdd_14 ...
D Quantize @g_/e4_conv2d/BiasAdd_14:bias to asymmetric_quantized.
D Quantize @g_/e4_conv2d/BiasAdd_14:weight to asymmetric_quantized.
D Packing g_/e5_batchnorm/batchnorm/add_1_10 ...
D Packing g_/e5_conv2d/BiasAdd_11 ...
D Quantize @g_/e5_conv2d/BiasAdd_11:bias to asymmetric_quantized.
D Quantize @g_/e5_conv2d/BiasAdd_11:weight to asymmetric_quantized.
D Packing g_/e6_batchnorm/batchnorm/add_1_7 ...
D Packing g_/e6_conv2d/BiasAdd_8 ...
D Quantize @g_/e6_conv2d/BiasAdd_8:bias to asymmetric_quantized.
D Quantize @g_/e6_conv2d/BiasAdd_8:weight to asymmetric_quantized.
D Packing g_/e7_batchnorm/batchnorm/add_1_4 ...
D Packing g_/e7_conv2d/BiasAdd_5 ...
D Quantize @g_/e7_conv2d/BiasAdd_5:bias to asymmetric_quantized.
D Quantize @g_/e7_conv2d/BiasAdd_5:weight to asymmetric_quantized.
I Build config finished.
I Queue cancelled.
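One thing worth double-checking in the build log: the "Add preprocess" line records the input normalization as mean [127.5, 127.5, 127.5] and scale 0.00784313725490196 (= 1/127.5), i.e. pixels are mapped from [0, 255] to [-1, 1] before quantization. If the pix2pix checkpoint was trained with a different normalization, the quantization ranges would be wrong independently of the crash. A minimal sketch of what the converter attaches to the input (plain Python, helper name is mine):

```python
MEAN = 127.5
SCALE = 0.00784313725490196  # 1 / 127.5, as recorded in the build log

def preprocess(pixel):
    """(x - mean) * scale, the normalization attached to the input layer."""
    return (pixel - MEAN) * SCALE

print(preprocess(0), preprocess(127.5), preprocess(255))  # approx -1, 0, +1
```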

jefferyzhang (Moderator, credits 12953)
#4 | Posted 2019-11-4 19:09:28
From the log, every op appears to be supported, yet it still crashed. Could you upload the model and your conversion code? We can file a bug for the relevant engineers to handle it.

ldol31627 (Intermediate Member, credits 310)
#5 (OP) | Posted 2019-11-5 09:32:43

Quoting jefferyzhang (2019-11-4 19:09): "From the log, every op appears to be supported, yet it still crashed. Could you upload the model and your conversion code? We can file a bug for the relevant engineers..."

https://pan.baidu.com/s/1anR0I8htvGt1k3T_CU3pPQ

jefferyzhang (Moderator, credits 12953)
#6 | Posted 2019-11-5 09:36:02

OK, it has been forwarded to the relevant engineers.

jefferyzhang (Moderator, credits 12953)
#7 | Posted 2019-11-5 09:37:49

One more question: have you loaded and run this model with TensorFlow itself?

ldol31627 (Intermediate Member, credits 310)
#8 (OP) | Posted 2019-11-6 08:29:35

Quoting jefferyzhang (2019-11-5 09:37): "One more question: have you loaded and run this model with TensorFlow itself?"

Not yet; I haven't run the pb file directly. I'll give it a try.

ldol31627 (Intermediate Member, credits 310)
#9 (OP) | Posted 2019-11-6 10:03:28

Quoting jefferyzhang (2019-11-5 09:37): "One more question: have you loaded and run this model with TensorFlow itself?"

I just tried it: loading the pb file directly with TensorFlow produces correct results.
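Since the pb runs correctly under TensorFlow, a common way to validate the converted model (once the NPU side stops crashing) is to feed both models the same input and compare the float output against the dequantized RKNN output, for example with cosine similarity. A small NumPy sketch (the helper is mine, not part of rknn-toolkit; the random arrays are stand-ins for real outputs):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two output tensors, flattened to vectors."""
    a = np.ravel(np.asarray(a, dtype=np.float64))
    b = np.ravel(np.asarray(b, dtype=np.float64))
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy demonstration using the output shape from the log, (1, 2, 2, 512):
rng = np.random.RandomState(0)
ref = rng.randn(1, 2, 2, 512)                # stand-in for the TF float output
noisy = ref + 0.01 * rng.randn(*ref.shape)   # stand-in for a quantized output
sim = cosine_similarity(ref, noisy)
print(sim)  # values above roughly 0.99 usually indicate a faithful conversion
```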

jefferyzhang (Moderator, credits 12953)
#10 | Posted 2019-11-6 10:33:39

Quoting ldol31627 (2019-11-6 10:03): "I just tried it: loading the pb file directly with TensorFlow produces correct results."

OK. Let's wait for our engineers to debug it.
