- Please modify ssd_mobilenet_v2.quantization.cfg!
- ==================================================================================================
- Modify method:
- 1. delete @FeatureExtractor/MobilenetV2/expanded_conv/depthwise/depthwise_227:weight and its value
- 2. delete @FeatureExtractor/MobilenetV2/expanded_conv/depthwise/depthwise_227:bias and its value
- 3. delete @FeatureExtractor/MobilenetV2/Conv/Relu6_228:out0 and its value
- ==================================================================================================
- The diff between the original quantization profile and the modified quantization profile is shown in the file quantization_profile.diff
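The three deletions above can be scripted instead of done by hand. A minimal sketch (hypothetical helper; the exact cfg layout varies between toolkit versions, and this assumes each entry starts at column 0 with its value lines indented beneath it):

```python
# Entries the toolkit asks us to delete, together with their values.
DROP = (
    "@FeatureExtractor/MobilenetV2/expanded_conv/depthwise/depthwise_227:weight",
    "@FeatureExtractor/MobilenetV2/expanded_conv/depthwise/depthwise_227:bias",
    "@FeatureExtractor/MobilenetV2/Conv/Relu6_228:out0",
)

def strip_entries(text, drop=DROP):
    """Drop top-level cfg entries whose names are in `drop`, plus their
    indented value lines."""
    out, skipping = [], False
    for line in text.splitlines(True):
        if line.strip() and not line[0].isspace():   # a new top-level entry
            skipping = any(line.startswith(name) for name in drop)
        if not skipping:
            out.append(line)
    return "".join(out)
```

Running it over the cfg and writing the result back would reproduce the manual edit; diffing the two strings gives the same picture as quantization_profile.diff.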
My question is: why do these entries have to be deleted, and what do these layers actually do?

- W [rknn_inputs_set:1271] warning: inputs[0] expected input len is 270000, but actual len is 921600!
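On the input-length warning above: 270000 is exactly 300 × 300 × 3 (the SSD input size fixed in the pipeline config further down), while 921600 is 640 × 480 × 3, so the test image was fed in at its original resolution instead of being resized to the model's input shape. A minimal sketch of the fix using a nearest-neighbour resize (in practice something like `cv2.resize` would be used; the 640 × 480 frame here is synthetic):

```python
import numpy as np

# Synthetic 640x480 RGB frame standing in for the real test image.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
assert frame.size == 921600                  # the "actual len" in the warning

# Nearest-neighbour resize down to the model's 300x300 input.
ys = np.linspace(0, frame.shape[0] - 1, 300).round().astype(int)
xs = np.linspace(0, frame.shape[1] - 1, 300).round().astype(int)
img = frame[ys][:, xs]
assert img.size == 270000                    # the "expected input len"
```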
- ASSERT in NeuralNet.cpp.getNNInst(1471): (m_NnInst.outImageZeroPoint == 0x0) && "only UINT8 support tensor flow quantization\n"
- terminate called after throwing an instance of 'bool'
- Aborted (core dumped)
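The assert above ("only UINT8 support tensor flow quantization", `outImageZeroPoint == 0`) relates to asymmetric uint8 quantization, where a real value is represented as real = scale × (q − zero_point). A sketch of how those parameters fall out of a tensor's value range (illustrative only, not the toolkit's actual code):

```python
# Asymmetric uint8 quantization: real = scale * (q - zero_point).
# The runtime assert fires when an output tensor's zero_point is not 0.
def quant_params(t_min, t_max):
    scale = (t_max - t_min) / 255.0
    zero_point = int(round(-t_min / scale))
    return scale, zero_point

# A ReLU6 output has range [0, 6], so its zero_point is 0:
s, zp = quant_params(0.0, 6.0)
# A tensor whose range straddles 0 gets a non-zero zero_point:
s2, zp2 = quant_params(-1.0, 1.0)
```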
Following the example of ssd_mobilenet_v2.quantization.cfg, I deleted the second-to-last layer from the quantization.cfg generated for my own model. step3.py then runs through, but the results are completely wrong.

had_in posted on 2020-5-12 11:39
Please help: I saw this in the documentation, and without the modification it throws an error. Could someone explain?
...
- # SSD with Mobilenet v2, configured for egohands dataset.
- # This file was extracted modified from 'pipeline.config' in
- # http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
- model {
- ssd {
- num_classes: 2
- box_coder {
- faster_rcnn_box_coder {
- y_scale: 10.0
- x_scale: 10.0
- height_scale: 5.0
- width_scale: 5.0
- }
- }
- matcher {
- argmax_matcher {
- matched_threshold: 0.5
- unmatched_threshold: 0.5
- ignore_thresholds: false
- negatives_lower_than_unmatched: true
- force_match_for_each_row: true
- }
- }
- similarity_calculator {
- iou_similarity {
- }
- }
- anchor_generator {
- ssd_anchor_generator {
- num_layers: 6
- min_scale: 0.05
- max_scale: 0.95
- aspect_ratios: 1.0
- aspect_ratios: 2.0
- aspect_ratios: 0.5
- aspect_ratios: 3.0
- aspect_ratios: 0.3333
- }
- }
- image_resizer {
- fixed_shape_resizer {
- height: 300
- width: 300
- }
- }
- box_predictor {
- convolutional_box_predictor {
- min_depth: 0
- max_depth: 0
- num_layers_before_predictor: 0
- use_dropout: false
- dropout_keep_probability: 0.8
- kernel_size: 3
- box_code_size: 4
- apply_sigmoid_to_scores: false
- conv_hyperparams {
- activation: RELU_6
- regularizer {
- l2_regularizer {
- weight: 0.00004
- }
- }
- initializer {
- truncated_normal_initializer {
- stddev: 0.03
- mean: 0.0
- }
- }
- batch_norm {
- train: true
- scale: true
- center: true
- decay: 0.9997
- epsilon: 0.001
- }
- }
- }
- }
- feature_extractor {
- type: "ssd_mobilenet_v2"
- min_depth: 16
- depth_multiplier: 1.0
- conv_hyperparams {
- activation: RELU_6
- regularizer {
- l2_regularizer {
- weight: 4e-05
- }
- }
- initializer {
- truncated_normal_initializer {
- stddev: 0.03
- mean: 0.0
- }
- }
- batch_norm {
- train: true
- scale: true
- center: true
- decay: 0.9997
- epsilon: 0.001
- }
- }
- #batch_norm_trainable: true
- use_depthwise: true
- }
- loss {
- classification_loss {
- weighted_sigmoid {
- }
- }
- localization_loss {
- weighted_smooth_l1 {
- }
- }
- hard_example_miner {
- num_hard_examples: 3000
- iou_threshold: 0.99
- loss_type: CLASSIFICATION
- max_negatives_per_positive: 3
- min_negatives_per_image: 3
- }
- classification_weight: 1.0
- localization_weight: 1.0
- }
- normalize_loss_by_num_matches: true
- post_processing {
- batch_non_max_suppression {
- score_threshold: 1e-8
- iou_threshold: 0.6
- max_detections_per_class: 100
- max_total_detections: 100
- }
- score_converter: SIGMOID
- }
- }
- }
- train_config {
- batch_size: 24
- optimizer {
- rms_prop_optimizer {
- learning_rate {
- exponential_decay_learning_rate {
- initial_learning_rate: 0.004
- decay_steps: 1000
- decay_factor: 0.8
- }
- }
- momentum_optimizer_value: 0.9
- decay: 0.9
- epsilon: 1.0
- }
- }
- fine_tune_checkpoint: "ssd_mobilenet_v2_coco_2018_03_29/model.ckpt"
- fine_tune_checkpoint_type: "detection"
- num_steps: 20000
- data_augmentation_options {
- random_horizontal_flip {
- }
- }
- data_augmentation_options {
- ssd_random_crop {
- }
- }
- }
- train_input_reader {
- tf_record_input_reader {
- input_path: "data/pill_case.tfrecord"
- }
- label_map_path: "data/pill_case_label_map.txt"
- }
- eval_config {
- num_examples: 500
- max_evals: 10
- use_moving_averages: false
- }
- eval_input_reader {
- tf_record_input_reader {
- input_path: "data/pill_case.tfrecord"
- }
- label_map_path: "data/pill_case_label_map.txt"
- shuffle: false
- num_readers: 1
- }
I have tested the trained model and it works correctly, but after conversion it fails to detect any targets. The example code step1.py contains the following line:

- rknn.config(channel_mean_value='127.5 127.5 127.5 128', reorder_channel='0 1 2', quantized_dtype='asymmetric_quantized-u8')
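For context on that line: channel_mean_value='M0 M1 M2 S' makes the converter normalize each input channel as (x − Mi) / S, so '127.5 127.5 127.5 128' maps uint8 pixels from [0, 255] to roughly [−1, 1], the usual MobileNet preprocessing. A quick check of the mapping:

```python
import numpy as np

# channel_mean_value='127.5 127.5 127.5 128' => (x - 127.5) / 128 per channel.
x = np.array([0.0, 127.5, 255.0], dtype=np.float32)
y = (x - 127.5) / 128.0
# endpoints land just inside [-1, 1], the midpoint at exactly 0
```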
When testing the ssd_mobilenet_v2_coco_2018_03_29 model with this parameter unchanged, the converted model predicts correctly. But with my own trained model, the converted model detects nothing; I inspected the output of the concat_1 node and found the predicted class for every box is background. I also tried changing the parameter to:

- rknn.config(channel_mean_value='0 0 0 1', reorder_channel='0 1 2', quantized_dtype='asymmetric_quantized-u8')
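The "everything is background" symptom can be checked directly by running sigmoid + argmax over the class dimension of the concat_1 output. A sketch, assuming the raw scores come out as (num_anchors, num_classes + 1) with background at index 0 (the actual layout in your graph may differ):

```python
import numpy as np

# Hypothetical raw class scores for 3 anchors: num_classes=2 plus background.
scores = np.array([[ 5.0, -2.0, -3.0],
                   [ 4.0, -1.0, -2.0],
                   [ 6.0, -3.0, -1.0]])
probs = 1.0 / (1.0 + np.exp(-scores))    # score_converter: SIGMOID in the config
pred = probs.argmax(axis=1)
all_background = bool((pred == 0).all()) # True here: every anchor says background
```

If this is true for every anchor on an image that the pre-conversion model detects correctly, the problem is almost certainly in preprocessing or quantization rather than in the trained weights.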
But after conversion the results are still incorrect.

had_in posted on 2020-5-13 10:59
Could someone advise? I successfully converted TensorFlow's ssd_mobilenet_v2_coco_2018_03_29 model and it predicts correctly, but based on this ...
Welcome to Toybrick (https://t.rock-chips.com/) | Powered by Discuz! X3.3 |