Hello,
I am trying to convert a PyTorch model to RKNN. Even though I didn't find any documentation on it, I figured out that my model needs to be in TorchScript (torch.jit) format, so I converted it and saved it using torch.jit.trace.
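For reference, the tracing step looked roughly like this (a minimal sketch with a placeholder network; my real model definition, weights and input size are of course different):

import torch
import torch.nn as nn

# placeholder network standing in for my actual model
class DummyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = DummyNet()
model.eval()

# trace with a dummy input of the same size I later give to the toolkit
example_input = torch.rand(1, 3, 256, 256)
traced = torch.jit.trace(model, example_input)
traced.save("net_trace.pt")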
Here is my model file, which as far as I am aware is in the correct format for the RKNN Toolkit:
https://mylifebox.com/shr/f45eed ... 93f&language=en
When I try to convert it using the RKNN Toolkit visualization GUI, with 256x256 pictures listed in the dataset.txt file and 3x256x256 in the input node field, I receive the following error:
"
-> Config model
-> Loading pytorch model
/home/kadir/anaconda3/envs/rknn/lib/python3.6/site-packages/onnx_tf/common/__init__.py:87: UserWarning: FrontendHandler.get_outputs_names is deprecated. It will be removed in future release .. Use node .outputs instead.
warnings.warn(message)
/home/kadir/Desktop/rknn-toolkit-v1.3.0/test_resimleri/net_trace.pt ********************
WARNING: Token 'COMMENT' defined, but not used
WARNING: There is 1 unused token
Syntax error in input! LexToken(NAMED_IDENTIFIER,'str',8,406)
D import clients finished
2020-04-30 11:53:06.641320: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-04-30 11:53:06.733713: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-30 11:53:06.734282: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 3.95GiB freeMemory: 3.27GiB
2020-04-30 11:53:06.734298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0
2020-04-30 11:53:07.405678: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-30 11:53:07.405702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0
2020-04-30 11:53:07.405710: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N
2020-04-30 11:53:07.406076: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2981 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
I Build net_trace complete.
D Optimizing network with force_1d_tensor, swapper, merge_layer, auto_fill_bn, resize_nearest_transformer, auto_fill_multiply, merge_avgpool_conv1x1, auto_fill_zero_bias, proposal_opt_import
D Optimizing network with conv2d_big_kernel_size_transform
-> Load pytorch model succeed!
-> Model process
W Genreate input meta fail, please check model.
W External input meta file "/tmp/tmp78uchgmi/net_trace_inputmeta.yml" is not exists.
Process Process-1:2:
Traceback (most recent call last):
File "/home/kadir/anaconda3/envs/rknn/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/kadir/anaconda3/envs/rknn/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/kadir/anaconda3/envs/rknn/lib/python3.6/site-packages/rknn/visualization/server/rknn_func.py", line 260, in start_quantization_convert
ret = rknn.hybrid_quantization_step1(dataset=dataset)
File "/home/kadir/anaconda3/envs/rknn/lib/python3.6/site-packages/rknn/api/rknn.py", line 266, in hybrid_quantization_step1
ret = self.rknn_base.hybrid_quantization_step1(dataset=dataset)
File "rknn/api/rknn_base.py", line 773, in rknn.api.rknn_base.RKNNBase.hybrid_quantization_step1
File "rknn/api/rknn_base.py", line 2249, in rknn.api.rknn_base.RKNNBase._generate_inputmeta
IndexError: list index out of range
"
Could you shed some light on why this error might be occurring? What are my possible options for avoiding this type of error?
By the way, the model I used doesn't have a fixed input size. It works with different input sizes, and its output shape is ([1,69,2],[1,69]), but I used 3x256x256 because the RKNN Toolkit asks for an input node size.
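In case it helps, this is the kind of script I would expect to be roughly equivalent to what the visualization GUI does, using the RKNN Toolkit Python API (just a sketch; the preprocessing values in config() are placeholders I have not verified for my model):

from rknn.api import RKNN

rknn = RKNN()

# placeholder preprocessing; I have not verified the right
# channel_mean_value / reorder_channel for my model
rknn.config(channel_mean_value='0 0 0 255', reorder_channel='0 1 2')

# the traced TorchScript file and the same 3x256x256 input size
# I typed into the GUI's input node field
rknn.load_pytorch(model='./net_trace.pt', input_size_list=[[3, 256, 256]])

# dataset.txt lists my 256x256 calibration pictures
rknn.build(do_quantization=True, dataset='./dataset.txt')

rknn.export_rknn('./net_trace.rknn')
rknn.release()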
This has turned into a long question; I have tried to explain it as best I could.
Regards, Kadir