Toybrick

Title: AI Development Series (7): OpenPose Development and Implementation [Print this page]

Author: 我是流氓我怕谁    Time: 2019-5-30 17:00
Title: AI Development Series (7): OpenPose Development and Implementation
[attach]321[/attach]

Video replay of the live tutorial session:
[attach]320[/attach]

1. Quick Start

    1 Prepare an RK3399Pro development board, a USB camera, a USB keyboard, a USB mouse, an HDMI display, and an Ethernet cable
    2 Connect the USB camera, keyboard, mouse, display, and Ethernet cable to the RK3399Pro board, then power it on
    3 Download the models; doing this on a PC is recommended:
  git clone https://github.com/spmallick/learnopencv.git
  cd learnopencv/OpenPose-Multi-Person/
  sudo chmod a+x getModels.sh
  ./getModels.sh
  python3 multi-person-openpose.py
    If you get a cv2.dnn error, your OpenCV is too old: upgrade to a version newer than 3.4.1 (3.4.1 itself excluded). If you get "ValueError: not enough values to unpack (expected 3, got 2)", that comes from the findContours API change in newer OpenCV: change the line "_, contours, _ = cv2.findContours" to "contours, _ = cv2.findContours".
     Running python3 multi-person-openpose.py is optional; an error here does not affect the following steps.
    4 Edit the pose/coco/pose_deploy_linevec.prototxt file: comment out the first 5 lines and add an Input layer, as shown in the image below (the exact text is also given in the author's reply of 2019-7-25 further down this thread)
       [attach]458[/attach]
    5 Download and extract the attachment, then copy all of its files (5 files) into the OpenPose-Multi-Person directory [attach]319[/attach]
    6 Run python3 rknn_transfer.py to convert the model. Conversion uses a lot of memory; it is advisable to set up swap larger than 2 GB first
    7 Copy the OpenPose-Multi-Person directory to the board; the steps below run on the board
    8 Install rknn-toolkit and its dependencies, setting up the environment per the wiki tutorial
    9 Install the gstreamer packages:
  sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
  sudo dnf install gstreamer1-libav
   10 Run python3 test_rnetCam.py (single person) or python3 multi-person-openpose_rknn-cam.py (multi-person)
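
    Before running the demos, it can help to confirm that OpenCV sees the USB camera (a minimal sketch, assuming the camera enumerates as device 0, as in the scripts below):
  import cv2
  cap = cv2.VideoCapture(0)
  ok, frame = cap.read()
  print('camera ok:', ok, 'frame shape:', frame.shape if ok else None)
  cap.release()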

2. Overview

VGG-19 is used to generate the feature maps. The network then refines its predictions over several stages, each with two branches: branch 1 produces the Confidence Maps and branch 2 produces the PAFs (Part Affinity Fields).
Input: [1x3x368x368] (the input data format is NCHW, in BGR order)
Output: [1x57x46x46] (the leading output channels are the body-part Confidence Maps, the remaining channels are the PAFs)
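
The 57 channels are laid out as 19 confidence maps (18 keypoints plus background) followed by 38 PAF channels; this matches the mapIdx table in the code below, whose PAF indices all fall in 19..56. A small sketch of slicing the two parts apart:
  # output has shape (1, 57, 46, 46)
  heatmaps = output[0, :19, :, :]   # part confidence maps (18 keypoints + background)
  pafs     = output[0, 19:, :, :]   # part affinity fields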

3. Code Walkthrough

├── pose_deploy_linevec_pre_compile.rknn        // the converted RKNN model
├── dataset.txt                                 // quantization dataset (one image path per line)
├── p1_368_368.jpg                              // image used for quantization
├── multi-person-openpose_rknn-cam.py           // RKNN inference plus post-processing (multi-person)
├── test_rnetCam.py                             // RKNN inference plus post-processing (single person)
└── rknn_transfer.py                            // converts the Caffe model to an RKNN model

rknn_transfer.py

from rknn.api import RKNN
import cv2
import time
import numpy as np

if __name__ == '__main__':

    # Create RKNN object
    rknn = RKNN()

    # Pre-process config
    print('--> config model')
    # Configure the model input so that the NPU pre-processes the data itself.
    # With channel_mean_value='0 0 0 255', inference applies
    # (R - 0)/255, (G - 0)/255, (B - 0)/255 to the input, i.e. the RKNN model
    # does the mean subtraction and normalization automatically.
    # reorder_channel='0 1 2' keeps the input RGB order unchanged;
    # reorder_channel='2 1 0' swaps channels 0 and 2, so the input becomes BGR.
    rknn.config(channel_mean_value='0 0 0 255', reorder_channel='2 1 0')
    print('done')

    # Load Caffe model
    print('--> Loading model')
    ret = rknn.load_caffe(model='./pose/coco/pose_deploy_linevec.prototxt', proto='caffe',
                          blobs='./pose/coco/pose_iter_440000.caffemodel')
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    # do_quantization=True enables quantization, which shrinks the model and
    # speeds up inference at some cost in accuracy.
    # pre_compile=True enables pre-compilation, which speeds up model loading.
    print('--> Building model')
    ret = rknn.build(do_quantization=True, dataset='./dataset.txt', pre_compile=True)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Export RKNN model')
    ret = rknn.export_rknn('./pose_deploy_linevec_pre_compile.rknn')
    if ret != 0:
        print('Export model failed!')
        exit(ret)
    print('done')

    rknn.release()


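To check the exported file before moving to the board-side scripts, a minimal load test can be used (a sketch; note that a pre-compiled model cannot run on the PC simulator, as several replies below also show, so run this on the board):

  from rknn.api import RKNN

  rknn = RKNN()
  ret = rknn.load_rknn('./pose_deploy_linevec_pre_compile.rknn')
  if ret != 0:
      print('Load RKNN model failed!')
      exit(ret)
  ret = rknn.init_runtime()  # on the RK3399Pro NPU, not the simulator
  if ret != 0:
      print('Init runtime environment failed!')
      exit(ret)
  rknn.release()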

multi-person-openpose_rknn-cam.py
import cv2
import time
import numpy as np
from random import randint
from rknn.api import RKNN

rknn = RKNN()
'''
protoFile = "pose/coco/pose_deploy_linevec.prototxt"
weightsFile = "pose/coco/pose_iter_440000.caffemodel"
'''
nPoints = 18
# COCO Output Format
keypointsMapping = ['Nose', 'Neck', 'R-Sho', 'R-Elb', 'R-Wr', 'L-Sho', 'L-Elb', 'L-Wr', 'R-Hip', 'R-Knee', 'R-Ank', 'L-Hip', 'L-Knee', 'L-Ank', 'R-Eye', 'L-Eye', 'R-Ear', 'L-Ear']

POSE_PAIRS = [[1,2], [1,5], [2,3], [3,4], [5,6], [6,7],
              [1,8], [8,9], [9,10], [1,11], [11,12], [12,13],
              [1,0], [0,14], [14,16], [0,15], [15,17],
              [2,16], [5,17] ]

# Indices of the PAFs corresponding to the POSE_PAIRS,
# e.g. for POSE_PAIR (1,2) the PAFs are at output indices (31,32); similarly (1,5) -> (39,40), and so on.
mapIdx = [[31,32], [39,40], [33,34], [35,36], [41,42], [43,44],
          [19,20], [21,22], [23,24], [25,26], [27,28], [29,30],
          [47,48], [49,50], [53,54], [51,52], [55,56],
          [37,38], [45,46]]

colors = [ [0,100,255], [0,100,255], [0,255,255], [0,100,255], [0,255,255], [0,100,255],
           [0,255,0], [255,200,100], [255,0,255], [0,255,0], [255,200,100], [255,0,255],
           [0,0,255], [255,0,0], [200,200,0], [255,0,0], [200,200,0], [0,0,0]]


def getKeypoints(probMap, threshold=0.1):

    mapSmooth = cv2.GaussianBlur(probMap, (3,3), 0, 0)

    mapMask = np.uint8(mapSmooth > threshold)
    #np.set_printoptions(threshold=np.inf)
    keypoints = []

    # Find the blobs.
    # Note: this 3-value unpack is the OpenCV 3.x API; on OpenCV 4.x
    # findContours returns 2 values (contours, hierarchy).
    _, contours, hierarchy = cv2.findContours(mapMask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    # For each keypoint, the confidence map is thresholded (0.1 here) to get a
    # binary image, then for each blob:
    #  - find the contours of the keypoint region,
    #  - build a mask of that region,
    #  - extract the region's probMap by multiplying probMap with the mask,
    #  - find the local maximum of that region.
    # The coordinates of the local maximum are the keypoint coordinates.
    for cnt in contours:
        blobMask = np.zeros(mapMask.shape)
        blobMask = cv2.fillConvexPoly(blobMask, cnt, 1)
        maskedProbMap = mapSmooth * blobMask
        _, maxVal, _, maxLoc = cv2.minMaxLoc(maskedProbMap)
        keypoints.append(maxLoc + (probMap[maxLoc[1], maxLoc[0]],))

    return keypoints


# Find valid connections between the different joints of all persons present
def getValidPairs(output):
    valid_pairs = []
    invalid_pairs = []
    n_interp_samples = 10
    paf_score_th = 0.1
    conf_th = 0.7
    # loop for every POSE_PAIR
    for k in range(len(mapIdx)):
        # A->B constitute a limb
        pafA = output[0, mapIdx[k][0], :, :]
        pafB = output[0, mapIdx[k][1], :, :]
        pafA = cv2.resize(pafA, (frameWidth, frameHeight))
        pafB = cv2.resize(pafB, (frameWidth, frameHeight))

        # candA entry example: (124, 365, 0.17102814, 43)
        #                       x    y    score       keypoint_id
        # Find the keypoints for the first and second limb:
        # the keypoints of each pair are split into two lists, candA and candB.
        # Each point in candA will be connected to some point in candB.
        candA = detected_keypoints[POSE_PAIRS[k][0]]
        candB = detected_keypoints[POSE_PAIRS[k][1]]

        nA = len(candA)
        nB = len(candB)

        # If keypoints for the joint-pair are detected,
        # check every joint in candA against every joint in candB:
        # calculate the distance vector between the two joints,
        # sample the PAF values at a set of interpolated points between them,
        # and use the score formula below to decide whether the connection is valid.

        if( nA != 0 and nB != 0):
            valid_pair = np.zeros((0,3))
            for i in range(nA):
                max_j = -1
                maxScore = -1
                found = 0
                for j in range(nB):
                    # Find d_ij, the unit vector from candA[i] to candB[j]
                    d_ij = np.subtract(candB[j][:2], candA[i][:2])
                    norm = np.linalg.norm(d_ij)
                    if norm:
                        d_ij = d_ij / norm
                    else:
                        continue
                    # Find p(u): interpolated points along the candidate limb
                    interp_coord = list(zip(np.linspace(candA[i][0], candB[j][0], num=n_interp_samples),
                                            np.linspace(candA[i][1], candB[j][1], num=n_interp_samples)))
                    # Find L(p(u)): PAF values at the interpolated points
                    # ('m' avoids shadowing the outer loop variable 'k')
                    paf_interp = []
                    for m in range(len(interp_coord)):
                        paf_interp.append([pafA[int(round(interp_coord[m][1])), int(round(interp_coord[m][0]))],
                                           pafB[int(round(interp_coord[m][1])), int(round(interp_coord[m][0]))] ])
                    # Find E: the mean of dot(L(p(u)), d_ij) over the samples
                    paf_scores = np.dot(paf_interp, d_ij)
                    avg_paf_score = sum(paf_scores)/len(paf_scores)

                    # Check if the connection is valid:
                    # if the fraction of interpolated vectors aligned with the PAF
                    # is higher than the threshold -> valid pair
                    if ( len(np.where(paf_scores > paf_score_th)[0]) / n_interp_samples ) > conf_th :
                        if avg_paf_score > maxScore:
                            max_j = j
                            maxScore = avg_paf_score
                            found = 1
                # Append the connection to the list
                if found:
                    # columns: keypoint_id of A, keypoint_id of B, score
                    valid_pair = np.append(valid_pair, [[candA[i][3], candB[max_j][3], maxScore]], axis=0)
            # Append the detected connections to the global list
            valid_pairs.append(valid_pair)
        else: # If no keypoints are detected
            invalid_pairs.append(k)
            valid_pairs.append([])
    return valid_pairs, invalid_pairs



# This function creates a list of keypoints belonging to each person.
# For each detected valid pair, it assigns the joint(s) to a person.
def getPersonwiseKeypoints(valid_pairs, invalid_pairs):
    # the last number in each row is the overall score

    # Start with an empty list that will hold each person's keypoints (body parts).
    personwiseKeypoints = -1 * np.ones((0, 19))
    for k in range(len(mapIdx)):
        if k not in invalid_pairs:
            partAs = valid_pairs[k][:,0]
            partBs = valid_pairs[k][:,1]
            indexA, indexB = np.array(POSE_PAIRS[k])

            for i in range(len(valid_pairs[k])):
                found = 0
                person_idx = -1
                # For every connection pair, check whether its partA is already
                # present in any person's list.
                for j in range(len(personwiseKeypoints)):
                    if personwiseKeypoints[j][indexA] == partAs[i]:
                        person_idx = j
                        found = 1
                        break

                # If it is, the keypoint belongs to that person, and partB of the
                # pair belongs to the same person: add partB to that person's list.
                if found:
                    personwiseKeypoints[person_idx][indexB] = partBs[i]
                    personwiseKeypoints[person_idx][-1] += keypoints_list[partBs[i].astype(int), 2] + valid_pairs[k][i][2]

                # If partA is not found in any list, the pair belongs to a person
                # not seen before, so create a new list (subset) for them.
                elif not found and k < 17:
                    row = -1 * np.ones(19)
                    row[indexA] = partAs[i]
                    row[indexB] = partBs[i]
                    # add the keypoint scores for the two keypoints and the paf score
                    row[-1] = sum(keypoints_list[valid_pairs[k][i,:2].astype(int), 2]) + valid_pairs[k][i][2]
                    personwiseKeypoints = np.vstack([personwiseKeypoints, row])
    return personwiseKeypoints


inWidth = 368
inHeight = 368

rknn.load_rknn('./pose_deploy_linevec_pre_compile.rknn')
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime environment failed')
    exit(ret)
print('done')

cap = cv2.VideoCapture(0)

hasFrame, frame = cap.read()

while cv2.waitKey(1) < 0:
    t = time.time()
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv2.waitKey()
        break

    # Resize the input image to 368x368
    # (hasFrame is checked first: resizing a None frame would crash)
    frame = cv2.resize(frame, (inWidth, inHeight), interpolation=cv2.INTER_CUBIC)
    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    # Convert the input to 'nchw' layout
    frame_input = np.transpose(frame, [2, 0, 1])
    t = time.time()
    [output] = rknn.inference(inputs=[frame_input], data_format="nchw")
    print("time:", time.time()-t)

    # Reshape the flat rknn output into a 1x57x46x46 tensor
    output = output.reshape(1, 57, 46, 46)

    detected_keypoints = []
    keypoints_list = np.zeros((0,3))
    keypoint_id = 0
    threshold = 0.1

    for part in range(nPoints):
        probMap = output[0,part,:,:]
        probMap = cv2.resize(probMap, (frame.shape[1], frame.shape[0]))
        keypoints = getKeypoints(probMap, threshold)
        keypoints_with_id = []
        for i in range(len(keypoints)):
            keypoints_with_id.append(keypoints[i] + (keypoint_id,))
            keypoints_list = np.vstack([keypoints_list, keypoints[i]])
            keypoint_id += 1

        detected_keypoints.append(keypoints_with_id)


    frameClone = frame.copy()

    #for i in range(nPoints):
    #    for j in range(len(detected_keypoints[i])):
    #        cv2.circle(frameClone, detected_keypoints[i][j][0:2], 5, colors[i], -1, cv2.LINE_AA)
    #cv2.imshow("Keypoints",frameClone)


    valid_pairs, invalid_pairs = getValidPairs(output)
    personwiseKeypoints = getPersonwiseKeypoints(valid_pairs, invalid_pairs)
    # Draw the limbs connecting each person's keypoints
    for i in range(17):
        for n in range(len(personwiseKeypoints)):
            index = personwiseKeypoints[n][np.array(POSE_PAIRS[i])]
            if -1 in index:
                continue
            B = np.int32(keypoints_list[index.astype(int), 0])
            A = np.int32(keypoints_list[index.astype(int), 1])
            cv2.line(frameClone, (B[0], A[0]), (B[1], A[1]), colors[i], 3, cv2.LINE_AA)


    cv2.imshow("Detected Pose" , frameClone)

    #cv2.waitKey(0)

rknn.release()





Author: you_big_father    Time: 2019-6-5 13:36
Hello, I get the following error when converting the model. How can I solve it?
--> Building model
E generate vdata error, could not find vdata file, vdes file
E Catch exception when building RKNN model!
T Traceback (most recent call last):
T   File "rknn/api/rknn_base.py", line 476, in rknn.api.rknn_base.RKNNBase.build
T   File "rknn/api/rknn_base.py", line 405, in rknn.api.rknn_base.RKNNBase._build
T   File "rknn/base/ovxconfiggenerator.py", line 171, in rknn.base.ovxconfiggenerator.generate_vx_config_from_files
T   File "rknn/base/ovxconfiggenerator.py", line 91, in rknn.base.ovxconfiggenerator.generate_vdata
T   File "rknn/base/RKNNlib/app/code_generator/casegenerator.py", line 363, in rknn.base.RKNNlib.app.code_generator.casegenerator.CaseGenerator.generate
T   File "rknn/base/RKNNlib/app/code_generator/casegenerator.py", line 329, in rknn.base.RKNNlib.app.code_generator.casegenerator.CaseGenerator._gen_special_case
T   File "rknn/base/RKNNlib/app/code_generator/casegenerator.py", line 274, in rknn.base.RKNNlib.app.code_generator.casegenerator.CaseGenerator._gen_vdata_file
T   File "rknn/base/RKNNlib/RKNNlog.py", line 105, in rknn.base.RKNNlib.RKNNlog.RKNNLog.e
T ValueError: generate vdata error, could not find vdata file, vdes file
Build model failed!

Author: bill    Time: 2019-6-11 06:47
Hello, I get the following error when converting the model (conversion run in a docker environment). How can I solve it?
root@89ee00158270:/home/OpenPose-Multi-Person# python3 rknn_transfer.py
--> config model
done
--> Loading model
E Deprecated caffe input usage, please change it to input layer.
E Catch exception when loading caffe model: ./pose/coco/pose_deploy_linevec.prototxt!
T Traceback (most recent call last):
T   File "rknn/api/rknn_base.py", line 281, in rknn.api.rknn_base.RKNNBase.load_caffe
T   File "rknn/base/RKNNlib/converter/caffeloader.py", line 977, in rknn.base.RKNNlib.converter.caffeloader.CaffeLoader.load
T   File "rknn/base/RKNNlib/converter/caffeloader.py", line 746, in rknn.base.RKNNlib.converter.caffeloader.CaffeLoader.parse_net_param
T   File "rknn/base/RKNNlib/RKNNlog.py", line 105, in rknn.base.RKNNlib.RKNNlog.RKNNLog.e
T ValueError: Deprecated caffe input usage, please change it to input layer.
Load model failed!



Author: bill    Time: 2019-6-11 08:23
bill posted on 2019-6-11 06:47:
Hello, I get the following error when converting the model (conversion run in a docker environment). How can I solve it?
root@89ee00158270:/home/Op ...

Mine is solved now: the problem was the pose_deploy_linevec.prototxt model file being an old-format version.
Author: bill    Time: 2019-6-12 07:26
I ran this example in the docker environment, but not a single pose was recognized; the printed output is basically all [0, ..., 0].
To run this example in docker, my main changes to the source were:
a. In the model conversion, set pre_compile=False; with the original True, conversion failed saying pre_compile is not supported
b. Changed the input from the camera to an image or video file
c. Changed output = output.reshape(1, 57, 46, 46) to output = output.reshape(1, 57, 28, 28), because the output size is 44688 and cannot be reshaped to (1, 57, 46, 46)
Author: bill    Time: 2019-6-12 23:00
This morning's problem is solved as well: in the modified prototxt parameters, dim 228 has to be changed to 368.
Author: 1074292224    Time: 2019-6-28 10:44
Does anyone have the modified prototxt file? My build step reports the error below; any help would be appreciated!
done
--> Building model
E Catch exception when building RKNN model!
T Traceback (most recent call last):
T   File "rknn/api/rknn_base.py", line 515, in rknn.api.rknn_base.RKNNBase.build
T   File "rknn/api/rknn_base.py", line 439, in rknn.api.rknn_base.RKNNBase._build
T   File "rknn/base/ovxconfiggenerator.py", line 197, in rknn.base.ovxconfiggenerator.generate_vx_config_from_files
T   File "rknn/base/RKNNlib/app/exporter/ovxlib_case/casegenerator.py", line 382, in rknn.base.RKNNlib.app.exporter.ovxlib_case.casegenerator.CaseGenerator.generate
T   File "rknn/base/RKNNlib/app/exporter/ovxlib_case/casegenerator.py", line 339, in rknn.base.RKNNlib.app.exporter.ovxlib_case.casegenerator.CaseGenerator._gen_unify_case
T   File "rknn/base/RKNNlib/app/exporter/ovxlib_case/casegenerator.py", line 234, in rknn.base.RKNNlib.app.exporter.ovxlib_case.casegenerator.CaseGenerator._generate_ovxlib_case
T   File "rknn/base/RKNNlib/app/exporter/ovxlib_case/vxnetgenerator.py", line 419, in rknn.base.RKNNlib.app.exporter.ovxlib_case.vxnetgenerator.VXNetGenerator.generate
T   File "rknn/base/RKNNlib/app/exporter/ovxlib_case/vxnetgenerator.py", line 652, in rknn.base.RKNNlib.app.exporter.ovxlib_case.vxnetgenerator.VXNetGenerator._generate_node_connections
T   File "rknn/base/RKNNlib/app/exporter/ovxlib_case/vxnetgenerator.py", line 474, in rknn.base.RKNNlib.app.exporter.ovxlib_case.vxnetgenerator.VXNetGenerator._gen_first
T KeyError: 'output_1'
Build model failed!
Author: 1074292224    Time: 2019-6-30 12:12
I hit the following error during conversion; what could be the cause? Hoping someone can advise!
--> config model
done
--> Loading model
E Catch exception when loading caffe model: ../pose/coco/pose_deploy_linevec.prototxt!
T Traceback (most recent call last):
T   File "rknn/api/rknn_base.py", line 288, in rknn.api.rknn_base.RKNNBase.load_caffe
T   File "rknn/base/RKNNlib/converter/caffeloader.py", line 993, in rknn.base.RKNNlib.converter.caffeloader.CaffeLoader.load_blobs
T   File "/usr/local/lib/python3.6/site-packages/google/protobuf/message.py", line 187, in ParseFromString
T     return self.MergeFromString(serialized)
T   File "/usr/local/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1124, in MergeFromString
T     if self._InternalParse(serialized, 0, length) != length:
T   File "/usr/local/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1189, in InternalParse
T     pos = field_decoder(buffer, new_pos, end, self, field_dict)
T   File "/usr/local/lib/python3.6/site-packages/google/protobuf/internal/decoder.py", line 700, in DecodeRepeatedField
T     raise _DecodeError('Truncated message.')
T google.protobuf.message.DecodeError: Truncated message.
Load model failed!

This is the content of my pose_deploy_linevec.prototxt file. Do the original lines at the very beginning need to be kept?
layer {
  name: "image"
  type: "Input"
  top: "image"
  input_param{
    shape {
          dim: 1
          dim: 3
          dim: 368
          dim: 368
    }
  }
}
Author: Guanghai.Wan    Time: 2019-7-25 18:36
Has anyone converted this successfully? If so, please send me the converted model file.
Author: 我是流氓我怕谁    Time: 2019-7-25 19:35
#input: "image"
#input_dim: 1
#input_dim: 3
#input_dim: 1 # This value will be defined at runtime
#input_dim: 1 # This value will be defined at runtime
layer {
  name: "image"
  type: "Input"
  top: "image"
  input_param {
    shape {
      dim: 1
      dim: 3
      dim: 368
      dim: 368
    }
  }
}

This is how to modify the prototxt file.



Author: kepurSong    Time: 2019-9-3 12:06
bill posted on 2019-6-11 08:23:
Mine is solved now: the problem was the pose_deploy_linevec.prototxt model file being an old-format version.

Hi, how did you solve this model-file version problem?

Author: kepurSong    Time: 2019-9-3 15:20
我是流氓我怕谁 posted on 2019-7-25 19:35:
#input: "image"
#input_dim: 1
#input_dim: 3

Hi, why does my version 0.9.9 report an error every time I export the rknn with pre_compile=True?
Author: gwill    Time: 2019-10-27 16:09
During conversion I hit this failure to load the model; what should I do?
[gwill@localhost OpenPose-Multi-Person]$ ./getModels.sh
--2019-10-27 15:43:12--  http://posefs1.perception.cs.cmu ... r_440000.caffemodel
Resolving host posefs1.perception.cs.cmu.edu (posefs1.perception.cs.cmu.edu)... 128.2.176.37
Connecting to posefs1.perception.cs.cmu.edu (posefs1.perception.cs.cmu.edu)|128.2.176.37|:80... failed: Connection refused.
[gwill@localhost OpenPose-Multi-Person]$ python3 rknn_transfer.py
/usr/lib64/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
--> config model
done
--> Loading model
E Blobs file ./pose/coco/pose_iter_440000.caffemodel not exists!
Load model failed!
Author: gwill    Time: 2019-10-27 20:02
After copying the model to the board and running python3 multi-person-openpose_rknn-cam.py, it prints a single done and then hangs forever. Has anyone run into this? How do I fix it? The board is getting hot to the touch and still produces no result!!
Author: gwill    Time: 2019-10-28 11:28
hisping posted on 2019-10-28 09:09:
I verified this is OK on rknn-toolkit v1.2; is that the version you have? Confirm a camera is plugged into the board, and that opencv is installed ...

My rknn-toolkit is version 0.99, since the video shows that version working. opencv is installed and a camera is attached. Last night the multi-person script kept hanging, and this morning the single-person one hangs too; it seems to be stuck at rknn.inference:
frameinput = np.transpose(frame, [2, 0, 1])
        t = time.time()
        [output] = rknn.inference(inputs=[frameinput], data_format="nchw")
It gets stuck here and never moves on.
Author: gwill    Time: 2019-10-28 17:31
hisping posted on 2019-10-28 09:09:
I verified this is OK on rknn-toolkit v1.2; is that the version you have? Confirm a camera is plugged into the board, and that opencv is installed ...

Today I reinstalled the whole environment and all packages: opencv is 3.4.1, python is 3.6.8, a USB camera is present, and rknn_toolkit is updated to 1.2.0. Running it now prints: using device with adb mode to init runtime, but npu_transfer_proxy is running, it may cause exception, please terminate npu_transfer_proxy first.
After killing npu_transfer_proxy it still fails, showing:
E connect to device failure(-1)
E catch exception when init runtime!
T File "rknn/api/rknn_base.py", line 769, in rknn/api/rknn_base.RKNNBase.init_runtime
T  ................
T ...................
T Exception: Init runtime environment failed
Author: gwill    Time: 2019-10-29 21:51
Last edited by gwill on 2019-10-29 22:12
hisping posted on 2019-10-29 09:08:
Are you running this model under fedora on an rk3399pro? You could try re-flashing the image, or http://t.rock-chips.com/wik ...

Hello, I am running it under fedora on an rk3399pro board. I updated everything on another 3399pro board from my lab and get the same behaviour. With rknn_toolkit 0.9.9 it hangs at rknn.inference; after updating to 1.2.0 it shows: E RKNNAPI: rknn_init, recv(MsgLoadAck) fail, -9(error_pipe) != 368!
E catch exception when init runtime!
T File "rknn/api/rknn_base.py", line 769, in rknn/api/rknn_base.RKNNBase.init_runtime
T  ................
T ...................
T Exception: Init runtime environment failed
Could you help me figure out what is causing this? Could something have gone wrong when I converted the model? If possible, could you send me a successfully converted model? My email is gwill_huang@163.com! Thank you very much!!

Author: liuwenhua    Time: 2019-12-9 18:22
bill posted on 2019-6-12 23:00:
This morning's problem is solved as well: in the modified prototxt parameters, dim 228 has to be changed to 368.

Have you done pose estimation with the body_25 model?
Author: chansy    Time: 2020-3-25 17:36
Is there a sample for Debian?
Author: wzp    Time: 2020-4-21 15:14
What should I do about the model-loading failure below during conversion?
[toybrick@toybrick rknn_openpose]$ python3 rknn_transfer.py
/usr/lib64/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
/home/toybrick/.local/lib/python3.6/site-packages/onnx_tf/common/__init__.py:87: UserWarning: FrontendHandler.get_outputs_names is deprecated. It will be removed in future release.. Use node.outputs instead.
  warnings.warn(message)
--> config model
done
--> Loading model
E Blobs file ./pose/coco/pose_iter_440000.caffemodel not exists!
Load model failed!

I cd'd into the ./pose/coco/ directory and there is no pose_iter_440000.caffemodel file in it. Has anyone got this example running? Please advise, thanks!

Author: wzp    Time: 2020-4-21 15:38
wzp posted on 2020-4-21 15:14:
What should I do about the model-loading failure below during conversion?
[toybrick@toybrick rknn_openpose]$ python3 rknn_transfer.py
/usr ...

Found the problem:
Use the getModels.sh script provided in the code package to download the model weight file. Note that the proto config file already exists in the folder.
From the command line, cd into the downloaded folder and run:
sudo chmod a+x getModels.sh
./getModels.sh
Then check whether the binary model file (the file with the .caffemodel suffix) has been downloaded into the folder. If the script will not run, you can download the model directly from http://posefs1.perception.cs.cmu ... r_440000.caffemodel. Once downloaded, place it in the "pose/coco/" folder.

Author: wzp    Time: 2020-4-21 17:18
wzp posted on 2020-4-21 15:38:
Found the problem:
Use the getModels.sh script provided in the code package to download the model weight file. Note that the proto config file already exists in the fo ...

Solved the problem above, but then hit a new one; model conversion fails:
warnings.warn(message)
--> config model
done
--> Loading model
done
--> Building model
W The RKNN Model generated can not run on simulator when pre_compile is True.
E pre_compile is not supproted on aarch64 platform.
Build model failed!
How do I solve this?
Author: wzp    Time: 2020-4-22 09:03
wzp posted on 2020-4-21 17:18:
Solved the problem above, but then hit a new one; model conversion fails:
warnings.warn(message)
--> config model

Just delete pre_compile=True and it works.
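That is, the build call in rknn_transfer.py becomes (a sketch of the changed line only):
  ret = rknn.build(do_quantization=True, dataset='./dataset.txt')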
Author: golo    Time: 2020-5-5 16:15
(venv) t450@t450:~/RK1808_stick/learnopencv/OpenPose-Multi-Person$ python test_rnetCam.py
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
--> Init runtime environment
E Pre compile model can not run on simulator
E Catch exception when init runtime!
T Traceback (most recent call last):
T   File "rknn/api/rknn_base.py", line 664, in rknn.api.rknn_base.RKNNBase.init_runtime
T   File "rknn/api/rknn_runtime.py", line 244, in rknn.api.rknn_runtime.RKNNRuntime.build_graph
T Exception: RKNN init failed. Wrong platform: simulator
Init runtime environment failed

Author: golo    Time: 2020-5-5 21:05
A new problem:
python test_rnetCam.py
W:tensorflow:From /home/t450/RK1808_stick/venv/lib/python3.5/site-packages/onnx_tf/handlers/backend/ceil.py:10: The name tf.ceil is deprecated. Please use tf.math.ceil instead.

W:tensorflow:From /home/t450/RK1808_stick/venv/lib/python3.5/site-packages/onnx_tf/handlers/backend/depth_to_space.py:12: The name tf.depth_to_space is deprecated. Please use tf.compat.v1.depth_to_space instead.

W:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/co ... 7-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

W:tensorflow:From /home/t450/RK1808_stick/venv/lib/python3.5/site-packages/onnx_tf/handlers/backend/log.py:10: The name tf.log is deprecated. Please use tf.math.log instead.

W:tensorflow:From /home/t450/RK1808_stick/venv/lib/python3.5/site-packages/onnx_tf/handlers/backend/random_normal.py:9: The name tf.random_normal is deprecated. Please use tf.random.normal instead.

W:tensorflow:From /home/t450/RK1808_stick/venv/lib/python3.5/site-packages/onnx_tf/handlers/backend/random_uniform.py:9: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

W:tensorflow:From /home/t450/RK1808_stick/venv/lib/python3.5/site-packages/onnx_tf/handlers/backend/upsample.py:13: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.

/home/t450/RK1808_stick/venv/lib/python3.5/site-packages/onnx_tf/common/__init__.py:87: UserWarning: FrontendHandler.get_outputs_names is deprecated. It will be removed in future release.. Use node.outputs instead.
  warnings.warn(message)
--> Init runtime environment
E Pre compile model can not run on simulator
E Catch exception when init runtime!
E Traceback (most recent call last):
E   File "rknn/api/rknn_base.py", line 788, in rknn.api.rknn_base.RKNNBase.init_runtime
E   File "rknn/api/rknn_runtime.py", line 270, in rknn.api.rknn_runtime.RKNNRuntime.build_graph
E Exception: RKNN init failed. Wrong platform: simulator
Init runtime environment failed

Author: Aiden    Time: 2020-5-12 18:39
@我是流氓我怕谁, roughly what FPS could an OpenPose mobilenet v1 model reach on the Toybrick RK3399Pro?
Author: SongJ    Time: 2020-5-14 16:05
./getModels.sh is extremely slow; what can I do?
Author: qing    Time: 2021-3-10 16:49
Hello, is that URL no longer valid now?
Author: qing    Time: 2021-3-10 17:55
It seems the file at that URL is no longer open to the public.
Author: spcwo    Time: 2021-7-12 20:31
Hi, I got it running with the code provided, but inference on a single-person frame takes 200 ms. Is it really that slow? How can I optimize it to run faster?
Author: 虚无灵幻    Time: 2021-11-11 16:17
How would I go about retraining a model?




Welcome to Toybrick (https://t.rock-chips.com/) Powered by Discuz! X3.3