Toybrick

DeepLabv3+ TensorFlow model porting discussion

chuyee
Intermediate Member
Credits: 352
Posted 2019-3-11 16:45:21 · Views: 3096 · Replies: 9
Last edited by chuyee on 2019-3-27 05:04

With the correct input and output layers, the model now converts to RKNN successfully. Two issues remain, accuracy and speed; discussion is welcome.

Accuracy: compared with the original model, the result after RKNN quantization has blurry segmentation boundaries, see the image below.

Speed: with ArgMax as the output layer, inference reaches only 3.8 FPS. Using the BiasAdd layer as the output instead (dropping the two bilinear resizes and the ArgMax) reaches 13.5 FPS. Could it be that RKNN has not optimized these two operators well?

The original post follows
----------
Hi,

I ran into the problem below when converting the TensorFlow DeepLabv3+ (MobileNetV2 backbone) model from http://download.tensorflow.org/m ... _2018_01_29.tar.gz. Could someone help locate the problem? I also saw that someone on this forum successfully converted a DeepLabv3 model; please comment or let me know which model can be converted to RKNN.

--> config model
done
--> Loading model
D import clients finished
I Current TF Model producer version 0 min consumer version 0 bad consumer version []
I Disconnect Assert_3/Assertut4096 and stack_3:in3
I Disconnect Assert/Assertut4096 and sub_2:in2
I Disconnect Assert_1/Assertut4096 and sub_3/y:in0
I Disconnect Assert_2/Assertut4096 and sub_5/y:in0
I short-cut MobilenetV2/expanded_conv_11/depthwise/Relu6ut0 - MobilenetV2/expanded_conv_11/project/Conv2D:in0 skip MobilenetV2/expanded_conv_11/depthwise_output
I short-cut MobilenetV2/expanded_conv_13/expand/BatchNorm/moving_varianceut0 - MobilenetV2/expanded_conv_13/expand/BatchNorm/FusedBatchNorm:in4 skip MobilenetV2/expanded_conv_13/expand/BatchNorm/moving_variance/read
I short-cut MobilenetV2/expanded_conv_12/addut0 - MobilenetV2/expanded_conv_13/input:in0 skip MobilenetV2/expanded_conv_12/output
...

I Try match FusedBatchNorm MobilenetV2/Conv/BatchNorm/FusedBatchNorm
I Match [['MobilenetV2/Conv/BatchNorm/FusedBatchNorm', 'MobilenetV2/Conv/BatchNorm/gamma', 'MobilenetV2/Conv/BatchNorm/beta', 'MobilenetV2/Conv/BatchNorm/moving_mean', 'MobilenetV2/Conv/BatchNorm/moving_variance']] [['FusedBatchNorm', 'C', 'C_1', 'C_2', 'C_3']] to [['batchnormalize']]
I Try match Conv2D MobilenetV2/Conv/Conv2D
I Match [['MobilenetV2/Conv/Conv2D', 'MobilenetV2/Conv/weights']] [['Conv', 'C']] to [['convolution']]
I Try match Sub sub_7
W Not match node sub_7 Sub
E Catch exception when loading tensorflow model: ./deeplabv3_mnv2_pascal.pb!
T Traceback (most recent call last):
T   File "rknn/api/rknn_base.py", line 191, in rknn.api.rknn_base.RKNNBase.load_tensorflow
T   File "rknn/base/rknnlib/converter/convert_tf.py", line 533, in rknn.base.rknnlib.converter.convert_tf.convert_tf.match_paragraph_and_param
T   File "rknn/base/rknnlib/converter/convert_tf.py", line 438, in rknn.base.rknnlib.converter.convert_tf.convert_tf._tf_push_ready_node
T TypeError: 'NoneType' object is not iterable
Load deeplabv3_mnv2_pascal failed!

I tried both "sub_7" and "MobilenetV2/Conv/Conv2D" as input and "ArgMax" as output.




chuyee
Intermediate Member
Credits: 352
OP · Posted 2019-3-26 03:34:43
Last edited by chuyee on 2019-3-26 03:35

Setting "MobilenetV2/Conv/Conv2D" as the input and "ArgMax" as the output fixed the problem. However, the RKNN output is not as good as the original one; see the results below.
Any ideas what might cause this?


Original image

Result from Tensorflow on PC

Result from the converted model on RKNN
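To quantify the gap between the PC and RKNN masks rather than eyeballing them, per-class IoU between the two label maps is a quick check. A minimal numpy sketch (the toy label maps are placeholders, not the actual outputs):

```python
import numpy as np

def per_class_iou(ref, pred, num_classes):
    """Per-class intersection-over-union between two label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(ref == c, pred == c).sum()
        union = np.logical_or(ref == c, pred == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious

# Toy 4x4 label maps standing in for the PC and RKNN outputs;
# the "RKNN" boundary is shifted one pixel to the left.
ref = np.array([[0, 0, 1, 1]] * 4)
pred = np.array([[0, 1, 1, 1]] * 4)
ious = per_class_iou(ref, pred, num_classes=2)
print(ious)  # class-0 IoU 0.5, class-1 IoU ~0.667
```

A low IoU concentrated along object boundaries (rather than whole regions flipping class) would point at the upsampling/quantization stage rather than the backbone.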


elooon
Registered Member
Credits: 139
Posted 2019-3-12 15:51:59
The link is unavailable.

chuyee
Intermediate Member
Credits: 352
OP · Posted 2019-3-12 16:17:43

chuyee
Intermediate Member
Credits: 352
OP · Posted 2019-3-28 13:44:28
Found the problem. It's caused by RKNN's implementation of TensorFlow's ResizeBilinear() op, which is both inaccurate (as the picture illustrates) and slow (~200 ms, and yes, that is milliseconds, not microseconds; see my other post for details). My workaround is to cut the graph before ResizeBilinear and implement that layer and everything after it on the CPU. With the NPU and CPU working in parallel, this is still faster than doing ResizeBilinear on the NPU.
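The CPU side of that workaround (bilinear upsampling of the coarse logits followed by ArgMax) can be sketched in plain numpy roughly like this. The 65x65x21 logit shape and the single 513x513 resize are illustrative assumptions, not the exact values from the graph:

```python
import numpy as np

def resize_bilinear(x, out_h, out_w):
    """Bilinear resize of an HxWxC float array (half-pixel centers)."""
    h, w, _ = x.shape
    ys = (np.arange(out_h) + 0.5) * h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * w / out_w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :, None]
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Stand-in for the coarse logits taken at the BiasAdd layer.
rng = np.random.default_rng(0)
logits = rng.random((65, 65, 21), dtype=np.float32)
up = resize_bilinear(logits, 513, 513)  # upsample to input resolution
seg = up.argmax(axis=-1)                # ArgMax -> per-pixel class id
```

Because this step only consumes the NPU's output, it can run on a CPU thread while the NPU starts on the next frame, which is where the parallelism mentioned above comes from.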

metaphor22
Newbie
Credits: 22
Posted 2019-5-7 09:47:00
Hello, have you run into a MemoryError? When I ran quantization, this error was raised, followed by a message telling me the RKNN model is None. I also used DeepLabv3+ with a MobileNetV2 backbone. Thanks in advance for your reply!

raymond
Newbie
Credits: 41
Posted 2019-7-4 14:08:29
@chuyee Which version of rknn-toolkit did you use to convert the model?

zw1221
Newbie
Credits: 14
Posted 2020-3-2 09:23:49
Hello, I have also been porting the DeepLabv3+ TensorFlow model these past two days, using the official DeepLabv3 MobileNetV2 model. The model converts to RKNN successfully, but at inference time it just hangs: no "done" output even after more than 30 minutes, and no error either. What could be causing this?

tomyhome
Intermediate Member
Credits: 270
Posted 2020-3-16 10:14:04
zw1221 posted on 2020-3-2 09:23:
Hello, I have also been porting the DeepLabv3+ TensorFlow model these past two days, using the official DeepLabv3 MobileNetV2 model. The model can ...

Hi, I haven't even gotten the model conversion to run yet. I'm running the conversion in a virtual machine: with quantization enabled it won't run at all, and with quantization disabled, inference on the board fails immediately. I don't know where the problem is.

zhaomr
Newbie
Credits: 30
Posted 2020-11-6 13:23:17
zw1221 posted on 2020-3-2 09:23:
Hello, I have also been porting the DeepLabv3+ TensorFlow model these past two days, using the official DeepLabv3 MobileNetV2 model. The model can ...

Hi, did you solve this problem? What was causing it?
