Once the aarch64 build of TensorFlow is installed on the development board's Linux system, applications can call the TensorFlow API directly. Because the board ships with a standard Fedora system, development can also be based on other open source frameworks such as Caffe, just as on a PC.
For Python development, applications only need to call the APIs provided by the RKNN-Toolkit package.
Rockchip provides the RKNN API SDK, an acceleration solution based on the NPU hardware of the RK3399Pro Linux/Android platform. It offers general acceleration support for AI applications developed with the RKNN API.
1、Linux Platform
Install the rknn-api development package first:
sudo dnf install -y rknn-api
If the installation fails, download the package from the OneDrive link: rknn_api_sdk
After the installation succeeds, the RKNN header file rknn_api.h and the library file librknn_api.so can be found in the system directories. An application only needs to include the header file and link the dynamic library to develop AI applications.
Include the header file:
#include <rockchip/rknn_api.h>
Link the dynamic library:
LDFLAGS = -lrknn_api
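As a minimal sketch of how these pieces fit together, the fragment below loads a pre-converted .rknn model from a file, creates an RKNN context, and releases it again. The model file name mobilenet_v1.rknn is only an example, and the exact rknn_init/rknn_destroy signatures and flag values should be checked against rknn_api.h and the RKNN API document for your installed SDK version; this is a sketch under those assumptions, not a complete application.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <rockchip/rknn_api.h>

int main(void)
{
    /* Read a pre-converted .rknn model into memory (file name is an example). */
    FILE *fp = fopen("mobilenet_v1.rknn", "rb");
    if (!fp) {
        perror("fopen");
        return -1;
    }
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);

    void *model = malloc(size);
    if (!model || fread(model, 1, size, fp) != (size_t)size) {
        fprintf(stderr, "failed to read model\n");
        fclose(fp);
        free(model);
        return -1;
    }
    fclose(fp);

    /* Create an RKNN context from the model data (flag 0 assumed to mean default behavior). */
    rknn_context ctx = 0;
    int ret = rknn_init(&ctx, model, (uint32_t)size, 0);
    if (ret < 0) {
        fprintf(stderr, "rknn_init failed: %d\n", ret);
        free(model);
        return -1;
    }

    /* ... set inputs, call rknn_run(), read outputs ... */

    rknn_destroy(ctx);
    free(model);
    return 0;
}

Such a file can be compiled and linked on the board with, for example, gcc demo.c -o demo -lrknn_api.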
2、Android Platform
Go to the Android/RKNPUTools/rknn-api/Android/rknn_api directory. The RKNN API is defined in the header file include/rknn_api.h, and the dynamic libraries are lib64/librknn_api.so and lib/librknn_api.so. An application only needs to include the header file and link the dynamic library to develop the JNI library of an AI application. Currently, only JNI development is supported on Android.
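Because only JNI development is supported on Android, the native side usually wraps the RKNN calls and exposes them to Java. The sketch below shows one possible JNI entry point in C; the Java package and class name (com.example.rknndemo.RknnWrapper) are hypothetical, and it assumes rknn_init copies the model data, so verify both points against your own project and the header shipped in include/rknn_api.h.

#include <jni.h>
#include <stdint.h>
#include "rknn_api.h"

/* Hypothetical Java counterpart:
 *   package com.example.rknndemo;
 *   public class RknnWrapper {
 *       public static native long init(byte[] model);
 *       public static native void destroy(long ctx);
 *   }
 */
JNIEXPORT jlong JNICALL
Java_com_example_rknndemo_RknnWrapper_init(JNIEnv *env, jclass clazz, jbyteArray model)
{
    jsize size = (*env)->GetArrayLength(env, model);
    jbyte *data = (*env)->GetByteArrayElements(env, model, NULL);

    rknn_context ctx = 0;
    int ret = rknn_init(&ctx, data, (uint32_t)size, 0);

    /* Assumes rknn_init copies the model buffer; if your SDK version does not,
     * keep the buffer alive for the lifetime of the context instead. */
    (*env)->ReleaseByteArrayElements(env, model, data, JNI_ABORT);
    return ret < 0 ? 0 : (jlong)ctx;
}

JNIEXPORT void JNICALL
Java_com_example_rknndemo_RknnWrapper_destroy(JNIEnv *env, jclass clazz, jlong ctx)
{
    if (ctx != 0) {
        rknn_destroy((rknn_context)ctx);
    }
}

The resulting shared library is then linked against lib64/librknn_api.so (or lib/librknn_api.so for 32-bit builds) in the application's native build configuration.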
For a more detailed introduction to the RKNN API, please refer to the document "RK3399Pro_Linux&Android_RKNN_API_V*.pdf".