chuyee posted on 2019-3-8 07:56
What inference time do you get for DeepLabv3? Is it possible to get under 1 s?
DeepLabv3+ is not actually as computationally intensive as you might think. Depending on your network architecture, you can run mobilenetv2_dm0.5 at up to 15 fps with an input size of 513x513.
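If you want to check the numbers on your own GPU, here is a minimal TF 1.x timing sketch against a frozen graph from the DeepLab model zoo (the ImageTensor/SemanticPredictions tensor names below match the zoo's exported checkpoints; the file path and iteration count are placeholders):

```python
# Minimal latency benchmark for a frozen DeepLab graph (TF 1.x).
import time
import numpy as np
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Exported DeepLab checkpoints take a uint8 image batch and return class ids.
image = np.random.randint(0, 255, (1, 513, 513, 3), dtype=np.uint8)

with tf.Session(graph=graph) as sess:
    # Warm-up run so CUDA init and graph optimization are not counted.
    sess.run('SemanticPredictions:0', feed_dict={'ImageTensor:0': image})
    n = 20
    start = time.time()
    for _ in range(n):
        sess.run('SemanticPredictions:0', feed_dict={'ImageTensor:0': image})
    ms = (time.time() - start) / n * 1000.0
    print('%.1f ms/frame (%.1f fps)' % (ms, 1000.0 / ms))
```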
protossw512 posted on 2019-3-8 14:25
DeepLabv3+ is not actually as computationally intensive as you might think. Depending on your network arc ...
What does "mobilnetv2_dm0.5" stand for? I got only ~1.2s with GTX 1080 Ti with the demo code https://github.com/tensorflow/mo ... /deeplab_demo.ipynb, which uses model deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz (513x513, mobilenet_v2 coco dataset). I haven't ported it to rknn successfully yet. But do you think I can achieve 15FPS after the porting?