Title: eval_perf BUG? (different FPS reported with perf_debug=True and False)
Author: chuyee  Time: 2019-3-26 03:53

Hi,
I ran into an interesting problem with rknn.eval_perf() while evaluating an RKNN model of deeplabv3 (see [1] and [2] for more background on that topic).
After rknn.inference(), I use the code below to print the performance evaluation result:
perf_results = rknn.eval_perf(inputs=[img])
However, the reported FPS differs by more than 3x (3.80 vs 13.08) depending on whether perf_debug is turned on or off in rknn.init_runtime(). Below is the result; nothing else changed except the init_runtime() call.
ret = rknn.init_runtime(perf_debug=False, eval_mem=False)
========================================================================
Performance
========================================================================
Total Time(us): 263127
FPS: 3.80
========================================================================
ret = rknn.init_runtime(perf_debug=True, eval_mem=False)
The actual wall-clock time rknn.inference() takes is about 0.3 s, so perf_debug=False seems to give the closer result. But the measurement is a black box to me. Could the RK people please take a look at the internal implementation to see what might cause the problem? Thanks!
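For reference, the wall-clock cross-check I did is roughly the following (a minimal sketch; the helper is generic and `rknn.inference` is only shown in the commented usage line, since it needs the actual board):

```python
import time

def measure_fps(infer_fn, inputs, warmup=3, runs=10):
    """Average wall-clock FPS of infer_fn over several timed runs."""
    # Warm-up iterations so one-time setup cost is not measured.
    for _ in range(warmup):
        infer_fn(inputs)
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn(inputs)
    elapsed = time.perf_counter() - start
    return runs / elapsed

# Usage on the device (not runnable without RKNN hardware):
# fps = measure_fps(lambda x: rknn.inference(inputs=x), [img])
```

With inference taking about 0.3 s per call, this gives roughly 3.3 FPS, which matches the perf_debug=False figure much better than 13.08.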
Author: chuyee  Time: 2019-3-28 13:48
Looks like this is caused by the ResizeBilinear function. When perf_debug=True, that layer's time is not counted; however, when perf_debug=False, the time it spends is included in the total time...
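A quick back-of-the-envelope check of this theory, using only the two numbers reported above (the perf_debug=True total is inferred from its 13.08 FPS, since that log was not posted):

```python
# Reported measurements from this thread.
TOTAL_DEBUG_OFF_US = 263127   # perf_debug=False (FPS 3.80)
FPS_DEBUG_ON = 13.08          # perf_debug=True

# Implied total time when perf_debug=True.
total_debug_on_us = 1_000_000 / FPS_DEBUG_ON   # ~76,453 us

# If ResizeBilinear were the only layer missing from the
# perf_debug=True total, this difference would be its cost.
missing_us = TOTAL_DEBUG_OFF_US - total_debug_on_us  # ~186,674 us
print(round(total_debug_on_us), round(missing_us))
```

So the unaccounted time would be roughly 0.19 s, about 70% of the total, which is plausible for a large bilinear upsampling at deeplabv3's output resolution, but only RK's internal profiler implementation can confirm it.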