Toybrick

Title: deeplabv3+ tensorflow model porting discussion

Author: chuyee    Time: 2019-3-11 16:45
Title: deeplabv3+ tensorflow model porting discussion
This post was last edited by chuyee on 2019-3-27 05:04

After using the correct input and output layers, the model can now be converted to rknn successfully. I am currently hitting accuracy and speed problems; discussion is welcome.

Accuracy: compared with the original model, the boundaries detected by the rknn-quantized model come out blurry. See the image below.

Speed:
With ArgMax as the output layer, inference only reaches 3.8 FPS. With the BiasAdd layer as the output (omitting the two bilinear resizes and the ArgMax), it reaches 13.5 FPS. Could it be that rknn has not optimized these two operators well?
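The two FPS figures already bound the cost of the skipped tail ops. A quick back-of-envelope check (plain Python, no rknn needed):

```python
# Per-frame latency implied by the two measurements above
t_full = 1.0 / 3.8      # full graph, ArgMax as output layer (~263 ms/frame)
t_cut = 1.0 / 13.5      # graph cut at BiasAdd (~74 ms/frame)

# Difference = cost of the two bilinear resizes plus ArgMax on the NPU
tail_ms = (t_full - t_cut) * 1000.0
print(round(tail_ms))   # ~189 ms for the skipped tail ops
```

That roughly 189 ms is consistent with the ~200 ms per-call ResizeBilinear latency reported later in this thread, which supports the suspicion that these two operators, not the backbone, are the bottleneck.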

The original post follows:
----------
Hi,

I ran into the problem below when converting the tensorflow deeplabv3+ (mobilenet_v2 backbone) model from http://download.tensorflow.org/m ... _2018_01_29.tar.gz. Please help me locate the problem. I also saw that someone on this forum had successfully converted a deeplabv3 model. Please comment or let me know which model can be converted to rknn.

--> config model
done
--> Loading model
D import clients finished
I Current TF Model producer version 0 min consumer version 0 bad consumer version []
I Disconnect Assert_3/Assertut4096 and stack_3:in3
I Disconnect Assert/Assertut4096 and sub_2:in2
I Disconnect Assert_1/Assertut4096 and sub_3/y:in0
I Disconnect Assert_2/Assertut4096 and sub_5/y:in0
I short-cut MobilenetV2/expanded_conv_11/depthwise/Relu6ut0 - MobilenetV2/expanded_conv_11/project/Conv2D:in0 skip MobilenetV2/expanded_conv_11/depthwise_output
I short-cut MobilenetV2/expanded_conv_13/expand/BatchNorm/moving_varianceut0 - MobilenetV2/expanded_conv_13/expand/BatchNorm/FusedBatchNorm:in4 skip MobilenetV2/expanded_conv_13/expand/BatchNorm/moving_variance/read
I short-cut MobilenetV2/expanded_conv_12/addut0 - MobilenetV2/expanded_conv_13/input:in0 skip MobilenetV2/expanded_conv_12/output
...

I Try match FusedBatchNorm MobilenetV2/Conv/BatchNorm/FusedBatchNorm
I Match [['MobilenetV2/Conv/BatchNorm/FusedBatchNorm', 'MobilenetV2/Conv/BatchNorm/gamma', 'MobilenetV2/Conv/BatchNorm/beta', 'MobilenetV2/Conv/BatchNorm/moving_mean', 'MobilenetV2/Conv/BatchNorm/moving_variance']] [['FusedBatchNorm', 'C', 'C_1', 'C_2', 'C_3']] to [['batchnormalize']]
I Try match Conv2D MobilenetV2/Conv/Conv2D
I Match [['MobilenetV2/Conv/Conv2D', 'MobilenetV2/Conv/weights']] [['Conv', 'C']] to [['convolution']]
I Try match Sub sub_7
W Not match node sub_7 Sub
E Catch exception when loading tensorflow model: ./deeplabv3_mnv2_pascal.pb!
T Traceback (most recent call last):
T   File "rknn/api/rknn_base.py", line 191, in rknn.api.rknn_base.RKNNBase.load_tensorflow
T   File "rknn/base/rknnlib/converter/convert_tf.py", line 533, in rknn.base.rknnlib.converter.convert_tf.convert_tf.match_paragraph_and_param
T   File "rknn/base/rknnlib/converter/convert_tf.py", line 438, in rknn.base.rknnlib.converter.convert_tf.convert_tf._tf_push_ready_node
T TypeError: 'NoneType' object is not iterable
Load deeplabv3_mnv2_pascal failed!

I tried both "sub_7" and "MobilenetV2/Conv/Conv2D" as input and "ArgMax" as output.




Author: elooon    Time: 2019-3-12 15:51
The link is unavailable.
Author: chuyee    Time: 2019-3-12 16:17
Removing the last dot '.' makes it work: http://download.tensorflow.org/m ... g_2018_01_29.tar.gz
Author: chuyee    Time: 2019-3-26 03:34
This post was last edited by chuyee on 2019-3-26 03:35

Setting "MobilenetV2/Conv/Conv2D" as the input and "ArgMax" as the output fixed the problem. However, the rknn output is not as good as the original one. See the results below.
Any ideas what might cause this?


Original image
[attach]186[/attach]
Result from Tensorflow on PC
[attach]191[/attach]
Result from the converted model on rknn
[attach]188[/attach]
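To go beyond eyeballing the two masks, the degradation can be quantified. A minimal sketch, assuming both results have been decoded to per-pixel class-label arrays of the same shape (the `miou` helper below is hypothetical, not part of rknn-toolkit):

```python
import numpy as np

def miou(mask_a, mask_b, num_classes):
    """Mean intersection-over-union between two per-pixel label masks."""
    ious = []
    for c in range(num_classes):
        a = mask_a == c
        b = mask_b == c
        union = np.logical_or(a, b).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(a, b).sum() / union)
    return float(np.mean(ious))

# Example: compare the PC result against the rknn result
# pc_mask, rknn_mask = ...  # (H, W) int arrays; PASCAL VOC has 21 classes
# print(miou(pc_mask, rknn_mask, 21))
```

A score near 1.0 means the quantized model agrees with the PC result; blurred boundaries show up as a drop concentrated in the thin border regions.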

Author: chuyee    Time: 2019-3-28 13:44
Found the problem. It's caused by rknn's implementation of the tensorflow ResizeBilinear() function, which is both inaccurate (as the picture illustrates) and slow (~200 ms; yes, milliseconds, not microseconds; see my other post for details). My workaround is to bypass the ResizeBilinear layer and everything after it, and implement those on the CPU. With the NPU and CPU running in parallel, this is still faster than doing ResizeBilinear on the NPU.
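The CPU-side replacement for the bypassed tail is small. A minimal numpy sketch, assuming the graph is cut so the NPU returns a low-resolution logits map of shape (h, w, num_classes), and that the original graph's ResizeBilinear uses align_corners=True (an assumption; check the exported pb):

```python
import numpy as np

def resize_bilinear(logits, out_h, out_w):
    """Bilinear resize of an (h, w, c) array, TF align_corners=True style."""
    h, w, _ = logits.shape
    # align_corners=True: output grid corners map exactly onto input corners
    ys = np.linspace(0.0, h - 1, out_h)
    xs = np.linspace(0.0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :, None]   # horizontal interpolation weights
    top = logits[y0][:, x0] * (1 - wx) + logits[y0][:, x1] * wx
    bot = logits[y1][:, x0] * (1 - wx) + logits[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def postprocess(logits, out_h, out_w):
    """Upsample the small logits map to full resolution, then take ArgMax."""
    return np.argmax(resize_bilinear(logits, out_h, out_w), axis=-1)
```

Note that the graph contains two chained resizes before ArgMax; a single bilinear resize straight from the logits map to the input resolution, as above, is a close approximation and saves one pass.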
Author: metaphor22    Time: 2019-5-7 09:47
Hello, have you run into a MemoryError? When I ran quantization, this error was raised, followed by another message telling me the RKNN model is None. I am also using DeepLab V3+ with a MobileNet V2 backbone. Thanks in advance for your reply!
Author: raymond    Time: 2019-7-4 14:08
@chuyee Which version of rknn-toolkit did you use to convert the model?
Author: zw1221    Time: 2020-3-2 09:23
Hello, I have also spent the past two days porting the deeplabv3+ tensorflow model, using the official deeplabv3 mobilenetV2 model. It converts to RKNN successfully, but it hangs forever at inference: after more than 30 minutes there is still no "done" output and no error either. What could be causing this?
Author: tomyhome    Time: 2020-3-16 10:14
zw1221 posted on 2020-3-2 09:23:
Hello, I have also spent the past two days porting the deeplabv3+ tensorflow model, using the official deeplabv3 ...

Hello, I haven't even gotten the model conversion to run yet. I am converting in a virtual machine: with quantization enabled the conversion won't run, and with quantization disabled, inference on the board fails outright. I don't know where the problem is.
Author: zhaomr    Time: 2020-11-6 13:23
zw1221 posted on 2020-3-2 09:23:
Hello, I have also spent the past two days porting the deeplabv3+ tensorflow model, using the official deeplabv3 ...

Hello, did you ever solve this problem? What was the cause?




Welcome to Toybrick (https://t.rock-chips.com/) Powered by Discuz! X3.3