Toybrick

Thread starter: protossw512

[Solved!] After converting TensorFlow's official DeepLabv3 with RKNN, the inference results are not ...

chuyee

Intermediate Member

Credits: 352
Posted on 2019-3-12 16:55:57
Quote: chuyee, posted on 2019-3-11 11:50
What does "mobilnetv2_dm0.5" stand for? I got only ~1.2s with GTX 1080 Ti with the demo code https ...

I'll answer it myself: dm stands for depth multiplier. A value of 0.5 means halving the number of channels used in each layer, which cuts the number of computations by a factor of about 4 and the number of learnable parameters by a factor of about 3. It is therefore much faster than the full model, but also less accurate. On my GTX 1080 Ti, the first frame always takes ~1.2s, but the following ones drop to ~0.015s. That's about 70 FPS! So it is plausible for the 3399Pro to achieve 15 FPS.
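
For intuition, a quick back-of-envelope sketch in Python of why halving the channels roughly quarters the computation. The layer shape here is hypothetical, purely for illustration, and is not taken from the actual MobileNetV2 definition:

# Rough multiply-accumulate count for one standard conv layer.
# The layer shape below is made up, just to show the scaling.
def conv_macs(h, w, c_in, c_out, k=3):
    return h * w * c_in * c_out * k * k

full = conv_macs(128, 128, 32, 64)   # depth multiplier 1.0
half = conv_macs(128, 128, 16, 32)   # dm 0.5 halves both c_in and c_out
print(full / half)                   # -> 4.0, hence ~4x fewer computations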

protossw512

Intermediate Member

Credits: 252
Thread starter | Posted on 2019-3-13 06:08:24
Quote: chuyee, posted on 2019-3-12 16:55
I answer it myself, dm stands for depth multiplier. 0.5 means halve the number of channels used in ...

Yep, the first frame does not represent the steady-state runtime of the TensorFlow framework. Even if you use dm=1.0 and add the ASPP and decoder modules, you can still run it at 10 FPS with an input size of 513x513, which is pretty amazing.
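
A minimal sketch of how such a measurement can exclude the warm-up frame; run_inference is a stand-in for whatever session.run or rknn.inference call is actually being timed:

import time

def avg_inference_time(run_inference, image, warmup=1, iters=10):
    # The first run pays one-time graph setup / compilation costs,
    # so discard it before averaging.
    for _ in range(warmup):
        run_inference(image)
    start = time.time()
    for _ in range(iters):
        run_inference(image)
    return (time.time() - start) / iters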

chuyee

Intermediate Member

Credits: 352
Posted on 2019-3-13 15:29:21
Here is my DeepLabv3 result on the 3399Pro. I only get about 3 FPS. I'm using the RKNN model converted from deeplabv3_mnv2_dm05_pascal.pb, with an input size of 513x513. It seems there is still quite a gap to 15 FPS. Did I miss anything?

inference time: 0.29389023780822754
inference time: 0.3319542407989502
inference time: 0.29692697525024414
inference time: 0.2921435832977295
inference time: 0.28955864906311035
inference time: 0.28310418128967285
inference time: 0.28433656692504883
inference time: 0.28536295890808105
inference time: 0.28377389907836914
inference time: 0.28389525413513184
--> Begin evaluate model performance
========================================================================
                               Performance                              
========================================================================
Total Time(us): 260892
FPS: 3.83
========================================================================
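
For reference, a minimal sketch of the conversion and evaluation flow with the RKNN-Toolkit 1.x Python API; the normalization values and the dataset file are assumptions, not taken from this thread:

from rknn.api import RKNN

rknn = RKNN()
# Mean/std normalization and channel order are assumptions here;
# adjust them to match how the .pb model was trained.
rknn.config(channel_mean_value='127.5 127.5 127.5 127.5', reorder_channel='0 1 2')
rknn.load_tensorflow(tf_pb='deeplabv3_mnv2_dm05_pascal.pb',
                     inputs=['MobilenetV2/Conv/Conv2D'],
                     outputs=['ArgMax'],
                     input_size_list=[[513, 513, 3]])
rknn.build(do_quantization=True, dataset='./dataset.txt')  # quantized build
rknn.export_rknn('./deeplabv3_mnv2_dm05.rknn')
rknn.init_runtime()
rknn.eval_perf()   # prints a Performance table like the one above
rknn.release()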

protossw512

Intermediate Member

Credits: 252
Thread starter | Posted on 2019-3-15 04:46:19
Quote: chuyee, posted on 2019-3-13 15:29
Here is my deeplabv3 result on 3399pro. I only get 3FPS. I'm using the rknn model converted from dee ...

It depends on which input/output nodes you are using, and on whether you quantize your model.

chuyee

Intermediate Member

Credits: 352
Posted on 2019-3-15 06:49:20
Quote: protossw512, posted on 2019-3-15 04:46
depends on what input / output node are you using, and whether you quantize your model.

Quantization is turned on. The input node is "MobilenetV2/Conv/Conv2D" and the output node is "ArgMax". I assume that's the best that can be achieved; otherwise more work would have to move from the NPU to the CPU, which would make the FPS even worse. Are you sure your 15 FPS was achieved with a 513x513 input size?

kitedream

Intermediate Member

Credits: 284
Posted on 2019-3-20 21:02:23
Last edited by kitedream on 2019-5-16 10:32

Impressive! I'm working on the conversion too, but the CPU time spent on pre- and post-processing is still quite high for me at the moment.
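
One way to see where that CPU time goes is to time each stage separately. A minimal sketch; preprocess and postprocess are stand-ins for your own functions, not anything defined in this thread:

import time

def timed(label, fn, *args):
    start = time.time()
    out = fn(*args)
    print('%s: %.4f s' % (label, time.time() - start))
    return out

# tensor  = timed('preprocess',  preprocess, frame)          # resize/normalize on CPU
# outputs = timed('inference',   rknn.inference, [tensor])   # NPU
# mask    = timed('postprocess', postprocess, outputs)       # argmax/colorize on CPU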

chuyee

Intermediate Member

Credits: 352
Posted on 2019-3-26 03:56:40
Quote: chuyee, posted on 2019-3-15 06:49
Quantize is turned on. Input is "MobilenetV2/Conv/Conv2D" and "ArgMax" is output.. I assume that' ...

@protossw512, see my bug report for rknn.eval_perf() at http://t.rock-chips.com/forum.ph ... &extra=page%3D1 . Could that be the reason you claimed 15 FPS?

protossw512

Intermediate Member

Credits: 252
Thread starter | Posted on 2019-3-26 04:18:28
Quote: chuyee, posted on 2019-3-26 03:56
@protossw512, see my bug report for rknn.perf_eval() on http://t.rock-chips.com/forum.php?mod=view ...

I am pretty sure. After testing the official MobileNet DeepLabv3 in Python, I switched to C++ and evaluated the performance of my own DeepLabv3 with native C++ code.
On top of the official MobileNet version, I added the decoder and ASPP modules, which bring additional operations, and used an input size of 400x400. I am able to run it at 9.x FPS.
I also found that the ArgMax node is pretty slow, so I used BiasAdd as the output node instead and wrote my own C++ implementation to compute the segmentation result.
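
The poster did this step in C++; a minimal NumPy equivalent of the same idea is sketched below, taking the raw BiasAdd logits and computing the per-pixel class on the CPU. The (h, w, num_classes) layout and the default sizes are assumptions, not taken from the thread:

import numpy as np

def logits_to_mask(logits, h=513, w=513, num_classes=21):
    # 'logits' is the raw output of the logits/semantic/BiasAdd node;
    # the (h, w, num_classes) layout is an assumption - check your model.
    logits = np.asarray(logits).reshape(h, w, num_classes)
    return np.argmax(logits, axis=-1).astype(np.uint8)  # per-pixel class id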

chuyee

Intermediate Member

Credits: 352
Posted on 2019-3-26 07:45:35
Quote: protossw512, posted on 2019-3-26 04:18
I am pretty sure, since after tested on Python with official mobilenet deeplabv3 I switched to C++  ...

Good point. If I replace 'ArgMax' with 'logits/semantic/BiasAdd', I can also get 12~15 FPS (without postprocessing).

inference time: 10.533683061599731
inference time: 0.11503362655639648
inference time: 0.08590936660766602
inference time: 0.08321857452392578
inference time: 0.08301472663879395
inference time: 0.0832514762878418
inference time: 0.08804035186767578
inference time: 0.08095145225524902
inference time: 0.08228850364685059
inference time: 0.08656930923461914
done
--> Begin evaluate model performance
========================================================================
                               Performance                              
========================================================================
Total Time(us): 64915
FPS: 15.40
========================================================================

A question for the RK folks: could you please check your ArgMax implementation? Why does it take so long ((263127 - 64915) ≈ 200000 us)?
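
For scale, a quick back-of-envelope using the poster's own numbers, assuming the ArgMax runs over the full 513x513x21 logits:

h, w, c = 513, 513, 21
elements = h * w * c              # ~5.5 million logits to scan
gap_s = (263127 - 64915) / 1e6    # extra time when ArgMax is the output node
print(elements, gap_s)            # ~5.5e6 elements in ~0.198 s, ~36 ns each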

