How do I convert a model that is already quantized?

#1 · protossw512 (OP) · 2019-3-26 01:18:03 · views: 9322 · replies: 6
Some TensorFlow and TFLite models come already quantized. How should a model like that be converted? If I pass quantization=True at conversion time, does it get quantized a second time, further reducing accuracy?
#2 · chuyee · 2019-3-26 02:15:21
My understanding is that if the original model is already quantized to int8, we just need to pass do_quantization=False in rknn.build() to bypass the RKNN quantization step. The model will be translated to the RKNN format (with all of its supported OP implementations). Inference will be accurate (since no precision is lost) and fast (hardware accelerated).
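
Roughly like this (an untested sketch; the file name and the config values are placeholders, not something verified for your model):

```python
from rknn.api import RKNN

rknn = RKNN()

# Input preprocessing; the values here are placeholders for your model.
rknn.config(channel_mean_value='0 0 0 1', reorder_channel='0 1 2')

# Load a TFLite model that was already quantized (placeholder file name).
ret = rknn.load_tflite(model='./mobilenet_v1_quant.tflite')
assert ret == 0, 'load_tflite failed'

# do_quantization=False skips RKNN's own quantization pass, so the
# existing quantized weights are carried over instead of re-quantized.
ret = rknn.build(do_quantization=False)
assert ret == 0, 'build failed'

rknn.export_rknn('./mobilenet_v1_quant.rknn')
rknn.release()
```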
#3 · protossw512 (OP) · 2019-3-27 06:42:28 · replying to #2
That sounds reasonable, but without inspecting their source code (which they do not provide), I am not quite sure whether you are correct.

Could any of the official folks answer my question?
#4 · chuyee · 2019-3-27 07:46:58 · replying to #3

BTW, have you by any chance tried do_quantization=False for the deeplabv3 model? I got all-zero output from the ArgMax layer. The output is valid in the do_quantization=True case with everything else kept the same. Could it be caused by float32 overflow? Does RKNN support float64?
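
For reference, the comparison between the two builds could look roughly like this (an untested sketch; the node names, input size, and paths are placeholders for a deeplabv3 graph):

```python
import numpy as np
from rknn.api import RKNN

def run_build(do_quant):
    """Build deeplabv3 with or without RKNN quantization, run one image."""
    rknn = RKNN()
    rknn.config(channel_mean_value='127.5 127.5 127.5 127.5',
                reorder_channel='0 1 2')
    rknn.load_tensorflow(tf_pb='./deeplabv3.pb',         # placeholder path
                         inputs=['sub_7'],               # placeholder node
                         outputs=['ArgMax'],
                         input_size_list=[[513, 513, 3]])
    if do_quant:
        rknn.build(do_quantization=True, dataset='./dataset.txt')
    else:
        rknn.build(do_quantization=False)
    rknn.init_runtime()
    img = np.load('./test_image.npy')                    # placeholder input
    out = rknn.inference(inputs=[img])[0]
    rknn.release()
    return out

# Symptom described above: the float build returns all zeros from ArgMax,
# while the quantized build looks sane.
print('float nonzero:', np.count_nonzero(run_build(False)))
print('quant nonzero:', np.count_nonzero(run_build(True)))
```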
#5 · protossw512 (OP) · 2019-3-28 02:07:19 · replying to #4

I have done that before, in order to test whether the output is correct compared to the TensorFlow model, and the results were valid; I didn't see any issue with the floating-point model.
#6 · chuyee · 2019-3-28 14:00:21 · replying to #5

I still get the same behavior for both dynamic_fixed_point-16 and do_quantization=False. However, both asymmetric_quantized-u8 and dynamic_fixed_point-8 work correctly. I have also identified that the problem happens after the MobilenetV2/expanded_conv_3/expand/BatchNorm/FusedBatchNorm layer. Before that layer, there are some inputs that need to be loaded from the weights file directly by a 'read' operation (how is that handled by RKNN?). I'm not sure it's related, though.
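
The sweep over quantization modes could be written roughly like this (an untested sketch; same placeholder graph, node names, and paths as the sketch above):

```python
import numpy as np
from rknn.api import RKNN

# Modes tried above; per this thread, only the 8-bit modes produced valid
# output, while dynamic_fixed_point-16 (like the unquantized float build)
# returned zeros after the FusedBatchNorm layer mentioned above.
MODES = ['asymmetric_quantized-u8',
         'dynamic_fixed_point-8',
         'dynamic_fixed_point-16']

for dtype in MODES:
    rknn = RKNN()
    rknn.config(channel_mean_value='127.5 127.5 127.5 127.5',
                reorder_channel='0 1 2',
                quantized_dtype=dtype)
    rknn.load_tensorflow(tf_pb='./deeplabv3.pb',         # placeholder path
                         inputs=['sub_7'],               # placeholder node
                         outputs=['ArgMax'],
                         input_size_list=[[513, 513, 3]])
    rknn.build(do_quantization=True, dataset='./dataset.txt')
    rknn.init_runtime()
    out = rknn.inference(inputs=[np.load('./test_image.npy')])[0]
    print(dtype, 'nonzero outputs:', np.count_nonzero(out))
    rknn.release()
```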
#7 · protossw512 (OP) · 2019-3-29 02:01:07 · replying to #6

I have also encountered the issue you mentioned above.
I tried to get the output from the BatchNorm layer with quantization=False, and the output turned out to be a zero tensor. I think it is a bug.
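
One way to pull that intermediate output is to rebuild the model with the suspect node as the graph output, so rknn.inference() returns it directly. Roughly (an untested sketch; the paths and the input node name are placeholders):

```python
import numpy as np
from rknn.api import RKNN

rknn = RKNN()
rknn.config(channel_mean_value='127.5 127.5 127.5 127.5',
            reorder_channel='0 1 2')
# Truncate the graph at the suspect layer so inference returns it directly.
rknn.load_tensorflow(
    tf_pb='./deeplabv3.pb',   # placeholder path
    inputs=['sub_7'],         # placeholder input node
    outputs=['MobilenetV2/expanded_conv_3/expand/BatchNorm/FusedBatchNorm'],
    input_size_list=[[513, 513, 3]])
rknn.build(do_quantization=False)   # float build, where the bug shows up
rknn.init_runtime()
out = rknn.inference(inputs=[np.load('./test_image.npy')])[0]
print('BatchNorm output all zero?', not np.any(out))
rknn.release()
```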