Toybrick
Title: Questions about a first model conversion [Print this page]
Author: chenli    Time: 2020-5-14 09:31
Title: Questions about a first model conversion
Today I converted a model for the first time and hit a baffling error. The conversion script I used is the gesture-recognition example code; the only line I changed is the RGB configuration:
rknn.config(channel_mean_value='0 0 0 255', reorder_channel='0 1 2')
I commented it out because my input is a single-channel array (is that the wrong way to handle it?).
The error log is below (by the way, are self-written models perhaps not supported?):
Use `tf.compat.v1.graph_util.extract_sub_graph`
E Catch exception when loading tensorflow model: /home/toybrick/PycharmProjects/untitled/cws_AI/check/classes/models/nn_models.pb!
E Traceback (most recent call last):
E   File "rknn/api/rknn_base.py", line 137, in rknn.api.rknn_base.RKNNBase.load_tensorflow
E   File "rknn/base/RKNNlib/converter/convert_tf.py", line 107, in rknn.base.RKNNlib.converter.convert_tf.convert_tf.__init__
E   File "rknn/base/RKNNlib/converter/tensorflowloader.py", line 50, in rknn.base.RKNNlib.converter.tensorflowloader.TF_Graph_Preprocess.__init__
E   File "rknn/base/RKNNlib/converter/tensorflowloader.py", line 65, in rknn.base.RKNNlib.converter.tensorflowloader.TF_Graph_Preprocess.scan_and_optim_graph
E   File "/home/toybrick/.local/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
E     return func(*args, **kwargs)
E   File "/home/toybrick/.local/lib/python3.7/site-packages/tensorflow/python/framework/graph_util_impl.py", line 182, in extract_sub_graph
E     _assert_nodes_are_present(name_to_node, dest_nodes)
E   File "/home/toybrick/.local/lib/python3.7/site-packages/tensorflow/python/framework/graph_util_impl.py", line 137, in _assert_nodes_are_present
E     assert d in name_to_node, "%s is not in graph" % d
E AssertionError: prediction is not in graph
done
--> Building model
Traceback (most recent call last):
  File "rknn_transfer.py", line 24, in <module>
    rknn.build(do_quantization=True, dataset='./dataset.txt')
  File "/home/toybrick/.local/lib/python3.7/site-packages/rknn/api/rknn.py", line 148, in build
    inputs = self.rknn_base.net.get_input_layers()
AttributeError: 'NoneType' object has no attribute 'get_input_layers'
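For a single-channel input, the usual approach is to keep the `config` call but describe just one channel rather than commenting it out. A minimal sketch, assuming RKNN-Toolkit 1.x; the single-channel form of `channel_mean_value` ('mean scale') and the 0/255 values are assumptions to verify against the toolkit's own documentation:

```python
from rknn.api import RKNN

rknn = RKNN()
# One mean per channel plus a scale: pixel_out = (pixel_in - 0) / 255.
# With a single channel there is no RGB/BGR order to swap, so
# reorder_channel lists only the one channel.
rknn.config(channel_mean_value='0 255', reorder_channel='0')
```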
Author: jefferyzhang    Time: 2020-5-14 15:51
Turn on verbose=True and look at the log again.
The log you posted is really hard to read...
Author: chenli    Time: 2020-5-14 17:44
I've solved that problem, but now there's a new bug I'd appreciate help with. I pulled the official gesture-recognition code from the site, and its model-building code looks almost the same as mine, so why does my model fail to convert? The error is as follows:
E Unknow layer "randomuniform"
E Catch exception when loading tensorflow model: /home/toybrick/PycharmProjects/untitled/cws_AI/check/classes/models/nn_models.pb!
E Traceback (most recent call last):
E File "rknn/api/rknn_base.py", line 190, in rknn.api.rknn_base.RKNNBase.load_tensorflow
E File "rknn/base/RKNNlib/converter/convert_tf.py", line 594, in rknn.base.RKNNlib.converter.convert_tf.convert_tf.match_paragraph_and_param
E File "rknn/base/RKNNlib/RKNNnet.py", line 171, in rknn.base.RKNNlib.RKNNnet.RKNNNet.new_layer
E File "/home/toybrick/.local/lib/python3.7/site-packages/rknn/base/RKNNlib/RKNNlog.py", line 327, in e
E raise ValueError(msg)
E ValueError: Unknow layer "randomuniform"
done
--> Building model
Traceback (most recent call last):
File "rknn_transfer.py", line 29, in <module>
rknn.build(do_quantization=False)
File "/home/toybrick/.local/lib/python3.7/site-packages/rknn/api/rknn.py", line 148, in build
inputs = self.rknn_base.net.get_input_layers()
AttributeError: 'NoneType' object has no attribute 'get_input_layers'
Author: chenli    Time: 2020-5-14 17:55
This post was last edited by chenli at 2020-5-14 20:35
The model-building code is below; any pointers are appreciated:
def weight_variable(self, shape):
    tf.set_random_seed(1)
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

"""Bias"""
def bias_variable(self, shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

"""Convolution"""
def conv2d(self, inputs, weight):
    # strides = [1, horizontal stride, vertical stride, 1]
    return tf.nn.conv2d(inputs, weight, strides=[1, 1, 1, 1], padding='SAME')

"""Pooling"""
def pool(self, image):
    # strides = [1, horizontal stride, vertical stride, 1]
    return tf.nn.max_pool(image, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

def training(self):
    # Reset the default graph so stale parameters are not reused; best done first
    tf.reset_default_graph()
    # Declare the input placeholders that will be fed at run time
    with tf.name_scope("Input"):
        inputs = tf.placeholder(tf.float32, name='inputs')
        inputs_reshape = tf.reshape(inputs, [-1, 128, 128, 1])
        labels = tf.placeholder(tf.float32, name='labels')
    with tf.name_scope("Layer1"):
        c1_weight = tf.Variable(self.weight_variable([3, 3, 1, 4]), name='c1_weight')
        c1_bias = tf.Variable(self.bias_variable([4]), name='c1_bias')
        c1_relu = tf.nn.relu(self.conv2d(inputs_reshape, c1_weight) + c1_bias)
        c1_pool = self.pool(c1_relu)
    with tf.name_scope("Layer2"):
        c2_weight = tf.Variable(self.weight_variable([3, 3, 4, 16]), name='c2_weight')
        c2_bias = tf.Variable(self.bias_variable([16]), name='c2_bias')
        c2_relu = tf.nn.relu(self.conv2d(c1_pool, c2_weight) + c2_bias)
        c2_pool = self.pool(c2_relu)
    with tf.name_scope("Layer4"):
        c4_weight = tf.Variable(self.weight_variable([3, 3, 16, 32]), name='c4_weight')
        c4_bias = tf.Variable(self.bias_variable([32]), name='c4_bias')
        c4_relu = tf.nn.relu(self.conv2d(c2_pool, c4_weight) + c4_bias)
        c4_pool = self.pool(c4_relu)
    with tf.name_scope("Layer5"):
        c5_weight = tf.Variable(self.weight_variable([3, 3, 32, 64]), name='c5_weight')
        c5_bias = tf.Variable(self.bias_variable([64]), name='c5_bias')
        c5_relu = tf.nn.relu(self.conv2d(c4_pool, c5_weight) + c5_bias)
        c5_pool = self.pool(c5_relu)
    with tf.name_scope("Layer6"):
        c6_weight = tf.Variable(self.weight_variable([3, 3, 64, 128]), name='c5_weight')
        c6_bias = tf.Variable(self.bias_variable([128]), name='c5_bias')
        c6_relu = tf.nn.relu(self.conv2d(c5_pool, c6_weight) + c6_bias)
        c6_pool = self.pool(c6_relu)  # after the last max-pool: 4*4*128 = 2048
        c6_pool_reshape = tf.reshape(c6_pool, [-1, 4 * 4 * 128])
    with tf.name_scope("Layer7"):
        f4_weight = tf.Variable(self.weight_variable([8 * 8 * 32, 8 * 32]), name='f4_weight')
        fn4_bias = tf.Variable(self.bias_variable([8 * 32]), name='fn4_bias')
        fn4_relu = tf.nn.relu(tf.matmul(c6_pool_reshape, f4_weight) + fn4_bias)
        fn4_drop = tf.nn.dropout(fn4_relu, keep_prob=self.drop_rate)
    with tf.name_scope("Output"):
        f7_weight = tf.Variable(self.weight_variable([8 * 32, self.output_size]), name='f7_weight')
        f7_bias = tf.Variable(self.bias_variable([self.output_size]), name='f7_bias')
        prediction = tf.add(tf.matmul(fn4_drop, f7_weight), f7_bias, name="prediction")
    '''Loss function.'''
    with tf.name_scope("Loss"):
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=labels))
    # Gradient descent: use the AdamOptimizer
    with tf.name_scope("Train_Step"):
        train_step = tf.train.AdamOptimizer(self.learning_rate).minimize(loss)
    with tf.name_scope("Accuracy"):
        accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1)), tf.float32))
    self.data = np.asarray(self.data)
    self.data1 = np.asarray(self.data1)
    images1, label1 = self.get_Batch(self.data, self.data1, self.batch_size)
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        '''Initialize variables and logs defined in TensorFlow.'''
        # Specify a file for saving the logs
        init_op = tf.global_variables_initializer()
        sess.run(init_op)
        # summary_writer = tf.summary.FileWriter(self.LOG_DIR, sess.graph)
        # Initialize the Variables
        '''Summarize all logs into files.'''
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess, coord)
        epoch = 0
        try:
            while not coord.should_stop():
                # Fetch a batch of data as tensors
                data, label = sess.run([images1, label1])
                epoch = epoch + 1
                # Run the ops below
                sess.run(train_step, feed_dict={inputs: data, labels: label})
                loss1 = sess.run(loss, feed_dict={inputs: data, labels: label})
                accuracy1 = sess.run(accuracy, feed_dict={inputs: data, labels: label})
                print('loss: ' + str(loss1))
                constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def,
                                                                           ["Input/inputs", "Input/labels",
                                                                            "Output/prediction"])
                with tf.gfile.FastGFile(os.path.join(self.MODEL_SAVE_PATH, self.MODEL_NAME_pb), mode="wb") as f:
                    f.write(constant_graph.SerializeToString())
                # Print the current loss and acc every 50 steps and record them in the log writer
                if epoch % 50 == 0:
                    print('accuracy: ' + str(accuracy1))
                '''convert_variables_to_constants saves the graph's variable values as
                constants. When the model file is saved, only the GraphDef part is
                exported; GraphDef records the computation from the input layer to the
                output layer. The names passed to convert_variables_to_constants are
                node names, not tensor names: "add:0" is a tensor name, "add" is the
                node name.'''
                # Export Input/inputs, Input/labels and Output/prediction as the visible nodes
                # Save the final network parameters
        except tf.errors.OutOfRangeError:
            print('Done training')
        finally:
            coord.request_stop()
            coord.join(threads)
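A side note on the shapes in the code above: five stride-2 max-pools shrink the 128x128 input to 4x4, so the flattened Layer6 output is 4*4*128 = 2048 features. The fully connected weight is declared as [8*8*32, 8*32], and 8*8*32 also equals 2048, which is why the matmul lines up even though the comments disagree. A quick plain-Python check:

```python
# Feature-map side length after five 2x2, stride-2 max-pool layers (SAME padding).
side = 128
for _ in range(5):
    side = (side + 1) // 2   # 128 -> 64 -> 32 -> 16 -> 8 -> 4

flat = side * side * 128     # Layer6 produces 128 output channels
print(flat)                  # 2048

# The FC weight shape [8*8*32, 8*32] matches only by arithmetic coincidence.
assert flat == 8 * 8 * 32 == 4 * 4 * 128
```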
Author: jefferyzhang    Time: 2020-5-14 17:58
It says so right there: E ValueError: Unknow layer "randomuniform"
That op isn't recognized and isn't supported... it doesn't look like a standard op.
Author: chenli    Time: 2020-5-14 20:21
This post was last edited by chenli at 2020-5-14 20:22
I know that much; what I don't know is which op is unsupported. I've posted the model-building code above; please take a look.
Author: jefferyzhang    Time: 2020-5-15 08:24
def weight_variable(self, shape):
    tf.set_random_seed(1)
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
How is this an inference model? All the weights are randomly generated.
Author: chenli    Time: 2020-5-15 08:42
Do you mean the weights should be written out as constants? I didn't originally write it this way; I suspected this part at first, so I copied it straight from the gesture-recognition source on your official site. Why does it still fail, and how should I change it to get the conversion through?
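A sketch of the usual export recipe, not the official Toybrick example; `build_inference_graph`, the checkpoint path, and the node names are placeholders. Train and save a checkpoint first, then build a separate inference-only graph (in particular without `tf.nn.dropout`, which itself places a random_uniform op on the prediction path), restore the trained variables into it, and only then freeze:

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

tf.reset_default_graph()
# Same layers as training, but no dropout and no loss/optimizer ops.
prediction = build_inference_graph()
saver = tf.train.Saver()

with tf.Session() as sess:
    # Restore trained weights instead of freezing freshly initialized ones.
    saver.restore(sess, './checkpoints/model.ckpt')
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['Output/prediction'])  # only the inference output
    with tf.gfile.GFile('nn_models.pb', 'wb') as f:
        f.write(frozen.SerializeToString())
```

Freezing the training graph directly keeps the labels placeholder, dropout, and the random-init ops reachable, which is one plausible way a converter ends up seeing a "randomuniform" layer in the .pb.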
Welcome to Toybrick (https://t.rock-chips.com/) |
Powered by Discuz! X3.3 |