Running on the RK3399Pro, I measured the inference time with both time.time() and eval_perf, and the two results differ a lot:
time.time() gives about 80 ms, while eval_perf reports 43.5 ms, which is a fairly large gap.
D RKNNAPI: ==============================================
D RKNNAPI: RKNN VERSION:
D RKNNAPI: API: 0.9.5 (c12de8a build: 2019-05-06 20:18:39)
D RKNNAPI: DRV: 0.9.6 (c12de8a build: 2019-05-06 20:10:17)
D RKNNAPI: ==============================================
done
rknn only 117 ms
rknn only 78 ms
total 96 ms
--> Begin evaluate model performance
========================================================================
Performance
========================================================================
Total Time(us): 43505
FPS: 22.99
========================================================================
The test code is as follows:
import time
from rknn.api import RKNN

rknn = RKNN()

img = img_preprocess(imgBGR)

print('--> load rknn model')
ret = rknn.load_rknn('./rknn/efficient_b2.rknn')
if ret != 0:
    print('load rknn failed')
    exit(ret)
print('done')

print('--> Init runtime environment')
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime environment failed')
    exit(ret)
print('done')

# sdk_version = rknn.get_sdk_version()
# print(sdk_version)

########## warm up / first inference (includes one-off startup overhead)
start = time.time()
outputs = rknn.inference(inputs=[img])
end = time.time()
print('rknn only %.0f ms' % ((end - start) * 1000))

# second inference, then post-processing
start = time.time()
outputs = rknn.inference(inputs=[img])
end = time.time()
print('rknn only %.0f ms' % ((end - start) * 1000))

out = get_multi_detect(anchors, outputs[2], outputs[0], outputs[1])
end = time.time()
print('total %.0f ms' % ((end - start) * 1000))
# imshow_boxes_p8(imgBGR, out)

# perf
print('--> Begin evaluate model performance')
perf_results = rknn.eval_perf(inputs=[img])
print('done')

rknn.release()
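For reference, this is a minimal sketch of how I would average time.time() over several runs after warm-up, to get a more stable wall-clock figure to compare against eval_perf. The run count N_RUNS and the helper name avg_inference_ms are my own choices, not part of the RKNN toolkit; the loop only times rknn.inference() itself.

    # Minimal sketch: average wall-clock time of rknn.inference() over several
    # runs after warm-up. N_RUNS and avg_inference_ms are arbitrary names
    # introduced here for illustration, not RKNN toolkit APIs.
    import time

    N_RUNS = 20

    def avg_inference_ms(rknn, img, n_runs=N_RUNS):
        total = 0.0
        for _ in range(n_runs):
            start = time.time()
            rknn.inference(inputs=[img])
            total += time.time() - start
        return total * 1000.0 / n_runs

    # Usage (after load_rknn() and init_runtime() succeed and one warm-up run):
    # print('avg rknn only %.1f ms' % avg_inference_ms(rknn, img))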