rknn.config(channel_mean_value='0 0 0 255', reorder_channel='0 1 2', batch_size=1)
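In RKNN Toolkit, channel_mean_value='0 0 0 255' means a per-channel mean of 0 and a scale of 255, i.e. each input pixel is normalized as (pixel - 0) / 255, and reorder_channel='0 1 2' keeps the channels in the order they are fed in. A rough NumPy equivalent of that preprocessing (my own sketch, not a toolkit API) would be:

import numpy as np

def normalize(img_u8):
    # (pixel - mean) / scale with mean = [0, 0, 0] and scale = 255,
    # channels left in their original order ('0 1 2').
    mean = np.array([0.0, 0.0, 0.0], dtype=np.float32)
    scale = 255.0
    return (img_u8.astype(np.float32) - mean) / scale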
rknn.load_tensorflow(tf_pb='./models/mobilenet_thin/graph_opt.pb', inputs=['image'], outputs=['Openpose/concat_stage7'], input_size_list=[[368, 432, 3]])
rknn.build(do_quantization=True, dataset='./data.txt', pre_compile=False)
The quantization data here is a single image that I resized to the model input size: ./data/person_432_368.jpg
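The dataset argument is a plain-text file listing the quantization images, one path per line. Assuming only that single image is used, data.txt would simply contain:

./data/person_432_368.jpg

After rknn.build succeeds, the model still has to be exported before it can be loaded as ./openpose_432_368.rknn later on; that step (sketched here, path assumed from the inference code below) is:

rknn.export_rknn('./openpose_432_368.rknn')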
D [rknn_init:749] Input Tensors:
D [printRKNNTensor:662] index=0 name= n_dims=4 dims=[1 3 368 432] n_elems=476928 size=476928 fmt=NCHW type=UINT8 qnt_type=AFFINE fl=0 zp=0 scale=0.003922
D [rknn_init:762] Output Tensors:
D [printRKNNTensor:662] index=0 name= n_dims=4 dims=[1 57 46 54] n_elems=141588 size=141588 fmt=NCHW type=UINT8 qnt_type=AFFINE fl=6 zp=6 scale=0.004017
done
from rknn.api import RKNN
import cv2
import numpy as np

rknn = RKNN(verbose=False)
rknn.load_rknn('./openpose_432_368.rknn')
rknn.init_runtime()

# Resize the test image to the model input size and convert BGR -> RGB.
w, h = 432, 368
frame = cv2.imread("person.jpg")
image = cv2.resize(frame, (w, h), interpolation=cv2.INTER_AREA)
frame_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Run inference, restore the NCHW shape, then transpose to NHWC.
[output] = rknn.inference(inputs=[frame_rgb])
output = output.reshape(1, 57, 46, 54)
output = np.transpose(output, (0, 2, 3, 1))
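After the transpose the output has shape (1, 46, 54, 57). For the mobilenet_thin model from tf-pose-estimation, the first 19 channels are typically the keypoint heatmaps and the remaining 38 are the Part Affinity Fields. Continuing from the code above, a minimal sketch of pulling out peak keypoint locations (my own illustration with an assumed 0.1 confidence threshold; a real pipeline would use tf-pose-estimation's PAF-based matching):

heatmaps = output[0, :, :, :19]   # (46, 54, 19) keypoint confidence maps
pafs     = output[0, :, :, 19:]   # (46, 54, 38) part affinity fields

# Naive single-person decoding: argmax of each heatmap channel,
# kept only if the confidence clears the threshold.
keypoints = []
for c in range(19):
    hm = heatmaps[:, :, c]
    y, x = np.unravel_index(np.argmax(hm), hm.shape)
    if hm[y, x] > 0.1:
        # Scale grid coordinates (54 x 46) back to the 432 x 368 input image.
        keypoints.append((c, x * 432 / 54, y * 368 / 46, float(hm[y, x])))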