Toybrick

Error converting a TensorFlow pb model to RKNN

wangshuyi (Registered Member, Points: 108) | #1 (OP)
Posted 2019-11-09 21:41:59 | Views: 24753 | Replies: 11
C:\Users\Shuyi\PycharmProjects\convert_rknn\venv\Scripts\python.exe C:/Users/Shuyi/PycharmProjects/convert_rknn/convert_test.py

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/co ... 7-contrib-sunset.md
  * https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.

C:\Users\Shuyi\PycharmProjects\convert_rknn\venv\lib\site-packages\onnx_tf\common\__init__.py:87: UserWarning: FrontendHandler.get_outputs_names is deprecated. It will be removed in future release.. Use node.outputs instead.
  warnings.warn(message)
--> config model
done
--> Loading model
W Verbose file path is invalid, debug info will not dump to file.
D import clients finished
W:tensorflow:From C:\Users\Shuyi\PycharmProjects\convert_rknn\venv\lib\site-packages\rknn\api\rknn.py:65: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
I Current TF Model producer version 0 min consumer version 0 bad consumer version []
I short-cut batch_normalization_8/moving_mean:out0 - model_1/batch_normalization_8/FusedBatchNorm:in3 skip model_1/batch_normalization_8/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_5/beta:out0 - model_1/batch_normalization_5/FusedBatchNorm:in2 skip model_1/batch_normalization_5/ReadVariableOp_1
I short-cut batch_normalization_11/moving_variance:out0 - model_1/batch_normalization_11/FusedBatchNorm:in4 skip model_1/batch_normalization_11/FusedBatchNorm/ReadVariableOp_1
I short-cut batch_normalization_2/moving_mean:out0 - model_1/batch_normalization_2/FusedBatchNorm:in3 skip model_1/batch_normalization_2/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_1/moving_mean:out0 - model_1/batch_normalization_1/FusedBatchNorm:in3 skip model_1/batch_normalization_1/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_6/beta:out0 - model_1/batch_normalization_6/FusedBatchNorm:in2 skip model_1/batch_normalization_6/ReadVariableOp_1
I short-cut batch_normalization_8/gamma:out0 - model_1/batch_normalization_8/FusedBatchNorm:in1 skip model_1/batch_normalization_8/ReadVariableOp
I short-cut conv2d_2/kernel:out0 - model_1/conv2d_2/Conv2D:in1 skip model_1/conv2d_2/Conv2D/ReadVariableOp
I short-cut batch_normalization_4/moving_mean:out0 - model_1/batch_normalization_4/FusedBatchNorm:in3 skip model_1/batch_normalization_4/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_5/moving_variance:out0 - model_1/batch_normalization_5/FusedBatchNorm:in4 skip model_1/batch_normalization_5/FusedBatchNorm/ReadVariableOp_1
I short-cut batch_normalization_3/gamma:out0 - model_1/batch_normalization_3/FusedBatchNorm:in1 skip model_1/batch_normalization_3/ReadVariableOp
I short-cut batch_normalization_2/gamma:out0 - model_1/batch_normalization_2/FusedBatchNorm:in1 skip model_1/batch_normalization_2/ReadVariableOp
I short-cut batch_normalization_10/beta:out0 - model_1/batch_normalization_10/FusedBatchNorm:in2 skip model_1/batch_normalization_10/ReadVariableOp_1
I short-cut batch_normalization_10/moving_variance:out0 - model_1/batch_normalization_10/FusedBatchNorm:in4 skip model_1/batch_normalization_10/FusedBatchNorm/ReadVariableOp_1
I short-cut batch_normalization_3/beta:out0 - model_1/batch_normalization_3/FusedBatchNorm:in2 skip model_1/batch_normalization_3/ReadVariableOp_1
I short-cut batch_normalization_10/moving_mean:out0 - model_1/batch_normalization_10/FusedBatchNorm:in3 skip model_1/batch_normalization_10/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_8/moving_variance:out0 - model_1/batch_normalization_8/FusedBatchNorm:in4 skip model_1/batch_normalization_8/FusedBatchNorm/ReadVariableOp_1
I short-cut batch_normalization_11/gamma:out0 - model_1/batch_normalization_11/FusedBatchNorm:in1 skip model_1/batch_normalization_11/ReadVariableOp
I short-cut batch_normalization_5/moving_mean:out0 - model_1/batch_normalization_5/FusedBatchNorm:in3 skip model_1/batch_normalization_5/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_1/gamma:out0 - model_1/batch_normalization_1/FusedBatchNorm:in1 skip model_1/batch_normalization_1/ReadVariableOp
I short-cut conv2d_5/kernel:out0 - model_1/conv2d_5/Conv2D:in1 skip model_1/conv2d_5/Conv2D/ReadVariableOp
I short-cut batch_normalization_5/gamma:out0 - model_1/batch_normalization_5/FusedBatchNorm:in1 skip model_1/batch_normalization_5/ReadVariableOp
I short-cut batch_normalization_3/moving_variance:out0 - model_1/batch_normalization_3/FusedBatchNorm:in4 skip model_1/batch_normalization_3/FusedBatchNorm/ReadVariableOp_1
I short-cut conv2d_4/kernel:out0 - model_1/conv2d_4/Conv2D:in1 skip model_1/conv2d_4/Conv2D/ReadVariableOp
I short-cut conv2d_10/kernel:out0 - model_1/conv2d_10/Conv2D:in1 skip model_1/conv2d_10/Conv2D/ReadVariableOp
I short-cut batch_normalization_2/moving_variance:out0 - model_1/batch_normalization_2/FusedBatchNorm:in4 skip model_1/batch_normalization_2/FusedBatchNorm/ReadVariableOp_1
I short-cut conv2d_8/kernel:out0 - model_1/conv2d_8/Conv2D:in1 skip model_1/conv2d_8/Conv2D/ReadVariableOp
I short-cut batch_normalization_1/beta:out0 - model_1/batch_normalization_1/FusedBatchNorm:in2 skip model_1/batch_normalization_1/ReadVariableOp_1
I short-cut batch_normalization_11/moving_mean:out0 - model_1/batch_normalization_11/FusedBatchNorm:in3 skip model_1/batch_normalization_11/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_11/beta:out0 - model_1/batch_normalization_11/FusedBatchNorm:in2 skip model_1/batch_normalization_11/ReadVariableOp_1
I short-cut batch_normalization_3/moving_mean:out0 - model_1/batch_normalization_3/FusedBatchNorm:in3 skip model_1/batch_normalization_3/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_4/moving_variance:out0 - model_1/batch_normalization_4/FusedBatchNorm:in4 skip model_1/batch_normalization_4/FusedBatchNorm/ReadVariableOp_1
I short-cut conv2d_13/bias:out0 - model_1/conv2d_13/BiasAdd:in1 skip model_1/conv2d_13/BiasAdd/ReadVariableOp
I short-cut batch_normalization_2/beta:out0 - model_1/batch_normalization_2/FusedBatchNorm:in2 skip model_1/batch_normalization_2/ReadVariableOp_1
I short-cut batch_normalization_8/beta:out0 - model_1/batch_normalization_8/FusedBatchNorm:in2 skip model_1/batch_normalization_8/ReadVariableOp_1
I short-cut batch_normalization_4/beta:out0 - model_1/batch_normalization_4/FusedBatchNorm:in2 skip model_1/batch_normalization_4/ReadVariableOp_1
I short-cut conv2d_12/kernel:out0 - model_1/conv2d_12/Conv2D:in1 skip model_1/conv2d_12/Conv2D/ReadVariableOp
I short-cut conv2d_3/kernel:out0 - model_1/conv2d_3/Conv2D:in1 skip model_1/conv2d_3/Conv2D/ReadVariableOp
I short-cut conv2d_7/kernel:out0 - model_1/conv2d_7/Conv2D:in1 skip model_1/conv2d_7/Conv2D/ReadVariableOp
I short-cut conv2d_11/kernel:out0 - model_1/conv2d_11/Conv2D:in1 skip model_1/conv2d_11/Conv2D/ReadVariableOp
I short-cut batch_normalization_9/beta:out0 - model_1/batch_normalization_9/FusedBatchNorm:in2 skip model_1/batch_normalization_9/ReadVariableOp_1
I short-cut batch_normalization_7/gamma:out0 - model_1/batch_normalization_7/FusedBatchNorm:in1 skip model_1/batch_normalization_7/ReadVariableOp
I short-cut conv2d_10/bias:out0 - model_1/conv2d_10/BiasAdd:in1 skip model_1/conv2d_10/BiasAdd/ReadVariableOp
I short-cut conv2d_6/kernel:out0 - model_1/conv2d_6/Conv2D:in1 skip model_1/conv2d_6/Conv2D/ReadVariableOp
I short-cut conv2d_9/kernel:out0 - model_1/conv2d_9/Conv2D:in1 skip model_1/conv2d_9/Conv2D/ReadVariableOp
I short-cut batch_normalization_9/moving_mean:out0 - model_1/batch_normalization_9/FusedBatchNorm:in3 skip model_1/batch_normalization_9/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_6/moving_mean:out0 - model_1/batch_normalization_6/FusedBatchNorm:in3 skip model_1/batch_normalization_6/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_6/moving_variance:out0 - model_1/batch_normalization_6/FusedBatchNorm:in4 skip model_1/batch_normalization_6/FusedBatchNorm/ReadVariableOp_1
I short-cut batch_normalization_7/moving_mean:out0 - model_1/batch_normalization_7/FusedBatchNorm:in3 skip model_1/batch_normalization_7/FusedBatchNorm/ReadVariableOp
I short-cut batch_normalization_7/moving_variance:out0 - model_1/batch_normalization_7/FusedBatchNorm:in4 skip model_1/batch_normalization_7/FusedBatchNorm/ReadVariableOp_1
I short-cut batch_normalization_9/moving_variance:out0 - model_1/batch_normalization_9/FusedBatchNorm:in4 skip model_1/batch_normalization_9/FusedBatchNorm/ReadVariableOp_1
I short-cut conv2d_13/kernel:out0 - model_1/conv2d_13/Conv2D:in1 skip model_1/conv2d_13/Conv2D/ReadVariableOp
I short-cut batch_normalization_9/gamma:out0 - model_1/batch_normalization_9/FusedBatchNorm:in1 skip model_1/batch_normalization_9/ReadVariableOp
I short-cut conv2d_1/kernel:out0 - model_1/conv2d_1/Conv2D:in1 skip model_1/conv2d_1/Conv2D/ReadVariableOp
I short-cut batch_normalization_1/moving_variance:out0 - model_1/batch_normalization_1/FusedBatchNorm:in4 skip model_1/batch_normalization_1/FusedBatchNorm/ReadVariableOp_1
I short-cut batch_normalization_4/gamma:out0 - model_1/batch_normalization_4/FusedBatchNorm:in1 skip model_1/batch_normalization_4/ReadVariableOp
I short-cut batch_normalization_10/gamma:out0 - model_1/batch_normalization_10/FusedBatchNorm:in1 skip model_1/batch_normalization_10/ReadVariableOp
I short-cut batch_normalization_7/beta:out0 - model_1/batch_normalization_7/FusedBatchNorm:in2 skip model_1/batch_normalization_7/ReadVariableOp_1
I short-cut batch_normalization_6/gamma:out0 - model_1/batch_normalization_6/FusedBatchNorm:in1 skip model_1/batch_normalization_6/ReadVariableOp
I Have 1 tensors convert to const tensor
D Const tensors:
D ['model_1/up_sampling2d_1/mul:out0']
2019-11-09 21:32:50.559439: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I build output layer attach_model_1/conv2d_10/BiasAdd:out0
I build output layer attach_model_1/conv2d_13/BiasAdd:out0
I build input layer image_input:out0
D Try match BiasAdd model_1/conv2d_10/BiasAdd
I Match convolution_biasadd [['model_1/conv2d_10/BiasAdd', 'model_1/conv2d_10/Conv2D', 'conv2d_10/bias', 'conv2d_10/kernel']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match BiasAdd model_1/conv2d_13/BiasAdd
I Match convolution_biasadd [['model_1/conv2d_13/BiasAdd', 'model_1/conv2d_13/Conv2D', 'conv2d_13/bias', 'conv2d_13/kernel']] [['BiasAdd', 'Conv', 'C', 'C_1']] to [['convolution']]
D Try match LeakyRelu model_1/leaky_re_lu_9/LeakyRelu
E Catch exception when loading tensorflow model: ./test.h5_frozen.pb!
E Traceback (most recent call last):
E   File "rknn\api\rknn_base.py", line 190, in rknn.api.rknn_base.RKNNBase.load_tensorflow
E   File "rknn\base\RKNNlib\converter\convert_tf.py", line 639, in rknn.base.RKNNlib.converter.convert_tf.convert_tf.match_paragraph_and_param
E AttributeError: 'LeakyRelu' object has no attribute 'load_params_from_tf'

louis (Registered Member, Points: 62) | #2
Posted 2019-11-10 15:35:03
Roll TensorFlow back to version 1.8 or earlier and it will work:
pip install tensorflow==1.8

wangshuyi (Registered Member, Points: 108) | #3 (OP)
Posted 2019-11-10 20:53:00

Quote: louis, 2019-11-10 15:35
Roll TensorFlow back to version 1.8 or earlier and it will work
pip install tensorflow==1.8

After making that change, I now get this:
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'LeakyRelu' in binary running on LAPTOP-VATIJA0N. Make sure the Op and Kernel are registered in the binary running in this process.
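
The fused LeakyRelu op is only registered in newer TF binaries, so a pb produced by a newer TF cannot run on 1.8. One possible workaround (a sketch only, not from the RKNN docs; the alpha value is an assumption) is to rebuild the model with an activation composed of ops that TF 1.8 does register, then re-freeze the pb:

import tensorflow as tf

# Equivalent leaky ReLU built from Maximum/Mul, which TF 1.8 registers,
# instead of the fused LeakyRelu op. alpha=0.1 is an assumed slope.
def leaky_relu_compat(x, alpha=0.1):
    return tf.maximum(alpha * x, x)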

jefferyzhang (Moderator, Points: 13502) | #4
Posted 2019-11-10 21:50:27
1. Which TF version generated this pb model?
2. Which TF version is used to read the pb file, and does reading it work?
3. Which TF version is used for the RKNN conversion?

RKNN currently supports TF 1.14 and earlier, but the TF version used to generate the pb must match the one used for the RKNN conversion.
Also, first try reading this pb file with TF itself; this error most likely means TF cannot load the model.
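
A minimal sketch of such a load check, assuming the TF 1.x graph API and the ./test.h5_frozen.pb path from the log above:

import tensorflow as tf

# Parse the frozen graph and import it; a failure here means TF itself
# cannot read the pb, independent of RKNN.
with tf.gfile.GFile('./test.h5_frozen.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    print('loaded %d ops' % len(graph.get_operations()))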

wangshuyi (Registered Member, Points: 108) | #5 (OP)
Posted 2019-11-11 14:24:17

Quote: jefferyzhang, 2019-11-10 21:50
1. Which TF version generated this pb model?
2. Which TF version is used to read the pb file, and does reading it work?
3. Which TF version is used for the RKNN ...

Following your advice: the model was trained with TensorFlow 1.11.0, the pb was read with 1.11.0, and the conversion also used 1.11.0, but the conversion used Keras 2.2.4; converting with the Keras bundled in TensorFlow fails. Now I am stuck at quantization. Do you have any suggestions?

wangshuyi (Registered Member, Points: 108) | #6 (OP)
Posted 2019-11-11 14:25:03

Quote: wangshuyi, 2019-11-11 14:24
Following your advice: the model was trained with TensorFlow 1.11.0, the pb was read with 1.11.0, and the conversion also used 1.11.0, ...

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')  # quantize the model
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')
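
For context, the dataset argument points at the quantization calibration list. In the RKNN-Toolkit examples this is a plain text file with one input image path per line; the paths below are placeholders:

# dataset.txt: one calibration image path per line (example paths)
./images/0001.jpg
./images/0002.jpg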

jefferyzhang (Moderator, Points: 13502) | #7
Posted 2019-11-11 14:45:19

Quote: wangshuyi, 2019-11-11 14:25
# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset=' ...

What do you mean by using Keras 2.2.4 for the conversion? Keras here should be the tf.keras bundled with TF, not a separately installed keras package.
Post the quantization error log.
You can also disable quantization first to check whether the RKNN model works at all, and deal with quantization last.
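
A minimal sketch of that check, assuming the same script as in #6 with only the build call changed:

# Build without quantization first, to verify the conversion itself
ret = rknn.build(do_quantization=False)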

wangshuyi (Registered Member, Points: 108) | #8 (OP)
Posted 2019-11-11 20:58:34

Quote: jefferyzhang, 2019-11-11 14:45
What do you mean by using Keras 2.2.4 for the conversion? Keras here should be the tf.keras bundled with TF, not a separately installed keras package.
Post the quantization error log ...

The conversion succeeds now, but I don't know how to do the quantization.

wangshuyi (Registered Member, Points: 108) | #9 (OP)
Posted 2019-11-11 21:01:40

from PIL import Image
import numpy as np

def letterbox_image(image, size):
    # Resize keeping the aspect ratio, then pad to the target size with gray
    iw, ih = image.size
    w, h = size
    scale = min(w / iw, h / ih)
    nw = int(iw * scale)
    nh = int(ih * scale)

    image = image.resize((nw, nh), Image.BICUBIC)
    new_image = Image.new('RGB', size, (128, 128, 128))
    new_image.paste(image, ((w - nw) // 2, (h - nh) // 2))
    return new_image

The result of the above is boxed_image; then:

image_data = np.array(boxed_image, dtype='float32')
image_data /= 255.
image_data = np.expand_dims(image_data, 0)

image_data is the input for model inference. How should quantization be done in this case, given that the input is no longer an image file?
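
A hedged note beyond what this thread's replies cover: in RKNN-Toolkit 1.x the quantization dataset is still a list of image files, and per-channel normalization such as the /255. above is normally expressed in rknn.config so the quantizer applies the same preprocessing when reading them, e.g.:

# Sketch: move the /255. normalization into the RKNN config so quantization
# can consume raw images listed in dataset.txt.
# channel_mean_value 'M0 M1 M2 S' applies (x - Mi) / S per channel.
rknn.config(channel_mean_value='0 0 0 255', reorder_channel='0 1 2')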

wangshuyi (Registered Member, Points: 108) | #10 (OP)
Posted 2019-11-11 21:03:24

Quote: jefferyzhang, 2019-11-11 14:45
What do you mean by using Keras 2.2.4 for the conversion? Keras here should be the tf.keras bundled with TF, not a separately installed keras package.
Post the quantization error log ...

These are the versions of the various Python dependency packages; with quantization disabled, the conversion succeeded.