Toybrick

How to convert an already-quantized model?

protossw512

OP | Posted on 2019-3-26 01:18:03
Some TensorFlow and TFLite models are already quantized. How should such a model be converted? If quantization=True is set during conversion, will the model be quantized again, further reducing accuracy?
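For reference, a minimal conversion sketch with RKNN-Toolkit is shown below; the model path, mean/scale values, and channel order are placeholder assumptions, and whether do_quantization=False actually keeps the model's existing int8 parameters is exactly the open question in this thread:

```python
# Minimal RKNN-Toolkit conversion sketch; paths and preprocessing values
# are placeholders and must match your own model.
from rknn.api import RKNN

rknn = RKNN()

# Preprocessing: mean/scale and channel order depend on how the model
# was trained; these values are only examples.
rknn.config(channel_mean_value='127.5 127.5 127.5 127.5',
            reorder_channel='0 1 2')

# Load a TFLite model that is already quantized.
ret = rknn.load_tflite(model='./model_quant.tflite')
assert ret == 0, 'load_tflite failed'

# do_quantization=False skips the toolkit's own quantization pass,
# presumably keeping the model's existing int8 parameters.
ret = rknn.build(do_quantization=False)
assert ret == 0, 'build failed'

rknn.export_rknn('./model.rknn')
rknn.release()
```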

protossw512

OP | Posted on 2019-3-27 06:42:28
Quoting chuyee (posted on 2019-3-26 02:15):
My understanding is, if the original model is already quantized to int8, we just need to pass do_qua ...

That sounds reasonable, but without inspecting their source code (which they do not provide), I am not sure you are correct.


Could any of the official folks answer my question?

protossw512

OP | Posted on 2019-3-28 02:07:19
Quoting chuyee (posted on 2019-3-27 07:46):
BTW, have you by any chance tried do_quantization=False for deeplabv3 model? I got all 0 output fr ...

I have done that before to test whether the output is correct compared to the TensorFlow model; the results were valid, and I didn't see any issue with the floating-point model.
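The kind of check described might look like this sketch (file names, the input shape, and the use of the TFLite interpreter as the reference are assumptions):

```python
# Sketch: compare the float RKNN model's output against the original
# TFLite model. File names and the input shape are assumptions.
import numpy as np
import tensorflow as tf
from rknn.api import RKNN

img = np.random.rand(1, 513, 513, 3).astype(np.float32)  # deeplabv3-style input

# Reference output from the original TFLite model.
interp = tf.lite.Interpreter(model_path='./deeplabv3.tflite')
interp.allocate_tensors()
interp.set_tensor(interp.get_input_details()[0]['index'], img)
interp.invoke()
ref = interp.get_tensor(interp.get_output_details()[0]['index'])

# Output from the RKNN model built with do_quantization=False,
# run in the toolkit's simulator (no target device given).
rknn = RKNN()
rknn.load_rknn('./deeplabv3.rknn')
rknn.init_runtime()
out = rknn.inference(inputs=[img])[0]
rknn.release()

print('max abs diff:', np.max(np.abs(ref - out.reshape(ref.shape))))
```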

protossw512

OP | Posted on 2019-3-29 02:01:07
Quoting chuyee (posted on 2019-3-28 14:00):
I still get the same behavior, for both dynamic_fixed_point-16 and do_quantization=False. However  ...

I also encountered the issue you mentioned above.
I tried to get the output from a BatchNorm layer with do_quantization=False, and the output turned out to be all-zero tensors. I think it is a bug.
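One way to reproduce this (a sketch only; the input/output node names and input size are hypothetical and depend on the actual graph) is to rebuild the model with the BatchNorm node as the graph output and check whether the float result is all zeros:

```python
# Sketch: truncate the TensorFlow graph at a BatchNorm node and inspect
# its output. Node names and input size below are hypothetical.
import numpy as np
from rknn.api import RKNN

rknn = RKNN()
rknn.config(channel_mean_value='0 0 0 1', reorder_channel='0 1 2')
ret = rknn.load_tensorflow(
    tf_pb='./model.pb',
    inputs=['input'],
    outputs=['MobilenetV2/Conv/BatchNorm/FusedBatchNorm'],  # hypothetical node
    input_size_list=[[224, 224, 3]])
assert ret == 0, 'load_tensorflow failed'

rknn.build(do_quantization=False)  # the float build where zeros were seen
rknn.init_runtime()

out = rknn.inference(inputs=[np.random.rand(224, 224, 3).astype(np.float32)])[0]
print('all zeros?', not np.any(out))  # True reproduces the suspected bug
rknn.release()
```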
