Toybrick

Thread starter: protossw512

[Solved!] After converting TensorFlow's official DeepLabv3 with RKNN, the inference results are not ...

chuyee

Intermediate Member
Credits: 352
#1
Posted on 2019-3-8 07:56:36
What inference time do you get for DeepLabv3? Is less than 1 s possible?

chuyee

中级会员

积分
352
沙发
发表于 2019-3-11 10:07:53 | 显示全部楼层
protossw512 posted on 2019-3-8 14:25:
Deeplabv3+ is not actually as computation intensive as you think of. Depending on your network arc ...

That's amazing!

chuyee

中级会员

积分
352
板凳
发表于 2019-3-11 11:50:26 | 显示全部楼层
protossw512 posted on 2019-3-8 14:25:
Deeplabv3+ is not actually as computation intensive as you think of. Depending on your network arc ...

What does "mobilnetv2_dm0.5" stand for? I get only ~1.2 s on a GTX 1080 Ti with the demo code https://github.com/tensorflow/mo ... /deeplab_demo.ipynb, which uses the model deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz (513x513, mobilenet_v2, COCO dataset). I haven't successfully ported it to RKNN yet. Do you think I can achieve 15 FPS after porting?

chuyee

中级会员

积分
352
地板
发表于 2019-3-12 02:28:13 | 显示全部楼层

You set your input layer to MobilenetV2/Conv/Conv2D, right? So are all the layers before it processed on the CPU? mobilenet_v2 can reach 40 FPS on RKNN without any problem. The question is: with the pre- and post-processing included, how many seconds does it take you to process one 513x513 image on the 3399Pro?
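As a rough sketch of what that CPU-side pre-processing step typically involves (this assumes the usual MobileNet-style [-1, 1] normalization and a naive pad/crop to 513x513; the exact pipeline in your frozen graph may differ):

```python
import numpy as np

def preprocess(image, size=513):
    """Pad/crop a HxWx3 uint8 image to size x size and scale to [-1, 1].

    Assumption: the graph expects MobileNet-style normalization
    x * (2/255) - 1; check your own model's input pipeline.
    """
    h, w, _ = image.shape
    ph, pw = min(h, size), min(w, size)
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    canvas[:ph, :pw] = image[:ph, :pw]          # naive pad/crop, no resize
    return canvas.astype(np.float32) * (2.0 / 255.0) - 1.0

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
inp = preprocess(img)
print(inp.shape)   # (513, 513, 3), float32 in [-1, 1]
```

This part runs on the CPU before the tensor is handed to the NPU, so it counts toward the per-frame wall-clock time even when the NPU itself hits 40 FPS.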

chuyee

中级会员

积分
352
5#
发表于 2019-3-12 16:55:57 | 显示全部楼层
chuyee posted on 2019-3-11 11:50:
What does "mobilnetv2_dm0.5" stand for? I got only ~1.2s with GTX 1080 Ti with the demo code https ...

I'll answer it myself: "dm" stands for depth multiplier. 0.5 means halving the number of channels in each layer, which cuts the number of computations by a factor of about 4 and the number of learnable parameters by a factor of about 3. It is therefore much faster than the full model, but also less accurate. On my GTX 1080 Ti the first frame always takes ~1.2 s, but the following ones drop to ~0.015 s. That's about 70 FPS! So it is plausible for the 3399Pro to achieve 15 FPS.
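A back-of-the-envelope check of those factors (a sketch using a plain convolution; real MobileNetV2 blocks are depthwise-separable, so the exact ratios differ):

```python
def conv_cost(cin, cout, k=3, h=129, w=129):
    """Parameters and multiply-accumulates for a plain k x k convolution."""
    params = k * k * cin * cout
    macs = params * h * w
    return params, macs

dm = 0.5
p_full, m_full = conv_cost(64, 64)
p_half, m_half = conv_cost(int(64 * dm), int(64 * dm))

# Both input and output channels scale by dm, so per-layer cost scales
# with dm squared: 0.5**-2 == 4.
print(m_full / m_half)   # → 4.0
print(p_full / p_half)   # → 4.0 for this layer; ~3x over the whole network,
                         # since the first/last layers don't scale with dm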

chuyee

中级会员

积分
352
6#
发表于 2019-3-13 15:29:21 | 显示全部楼层
Here is my DeepLabv3 result on the 3399Pro. I only get ~3 FPS. I'm using the RKNN model converted from deeplabv3_mnv2_dm05_pascal.pb, input size 513x513. There still seems to be quite a gap to 15 FPS. Did I miss anything?

inference time: 0.29389023780822754
inference time: 0.3319542407989502
inference time: 0.29692697525024414
inference time: 0.2921435832977295
inference time: 0.28955864906311035
inference time: 0.28310418128967285
inference time: 0.28433656692504883
inference time: 0.28536295890808105
inference time: 0.28377389907836914
inference time: 0.28389525413513184
--> Begin evaluate model performance
========================================================================
                               Performance                              
========================================================================
Total Time(us): 260892
FPS: 3.83
========================================================================

chuyee

中级会员

积分
352
7#
发表于 2019-3-15 06:49:20 | 显示全部楼层
protossw512 posted on 2019-3-15 04:46:
depends on what input / output node are you using, and whether you quantize your model.

Quantization is turned on. The input is "MobilenetV2/Conv/Conv2D" and the output is "ArgMax". I assume that's the best that can be achieved; otherwise more of the graph has to be moved from the NPU to the CPU, which would make the FPS even worse. Are you sure your 15 FPS is achieved with a 513x513 input size?

chuyee

中级会员

积分
352
8#
发表于 2019-3-26 03:56:40 | 显示全部楼层
chuyee posted on 2019-3-15 06:49:
Quantize is turned on. Input is "MobilenetV2/Conv/Conv2D" and "ArgMax" is output.. I assume that' ...

@protossw512, see my bug report for rknn.perf_eval() at http://t.rock-chips.com/forum.ph ... &extra=page%3D1 . Could that be the reason you claimed 15 FPS?

chuyee

中级会员

积分
352
9#
发表于 2019-3-26 07:45:35 | 显示全部楼层
protossw512 posted on 2019-3-26 04:18:
I am pretty sure, since after tested on Python with official mobilenet deeplabv3 I switched to C++  ...

Good point. If I replace 'ArgMax' with 'logits/semantic/BiasAdd', I can also get 12~15FPS (without postprocessing).

inference time: 10.533683061599731
inference time: 0.11503362655639648
inference time: 0.08590936660766602
inference time: 0.08321857452392578
inference time: 0.08301472663879395
inference time: 0.0832514762878418
inference time: 0.08804035186767578
inference time: 0.08095145225524902
inference time: 0.08228850364685059
inference time: 0.08656930923461914
done
--> Begin evaluate model performance
========================================================================
                               Performance                              
========================================================================
Total Time(us): 64915
FPS: 15.40
========================================================================

A question for the RK folks: could you please check your ArgMax implementation? Why does it take so long, (263127 - 64915) ≈ 200000 us?
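For comparison, once the graph is cut at 'logits/semantic/BiasAdd' the same ArgMax can be done on the CPU with NumPy after reading the logits back (a sketch assuming an output tensor of shape 513x513x21 for the 21 PASCAL VOC classes):

```python
import time
import numpy as np

# Simulated logits as they would come back from the NPU when the graph
# is cut at 'logits/semantic/BiasAdd' (21 PASCAL VOC classes assumed).
logits = np.random.rand(513, 513, 21).astype(np.float32)

t0 = time.time()
seg_map = np.argmax(logits, axis=-1)   # per-pixel class id, shape (513, 513)
elapsed = time.time() - t0
print(seg_map.shape, f"{elapsed * 1000:.1f} ms")
```

Timing this on the 3399Pro's own CPU would show how much of the ~200 ms gap is inherent to the argmax itself versus the NPU ArgMax implementation.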


