Toybrick
Title:
My network is as follows; after saving it in TorchScript format, it still cannot run on the RK3399...
Author: 18022443868
Time: 2020-5-7 14:34
Title: My network is as follows; after saving it in TorchScript format, it still cannot run on the RK3399...
hardnet(
(base): ModuleList(
(0): ConvLayer(
(conv): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(norm): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(16, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(24, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(32, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(48, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(10, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(58, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(18, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(10, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(76, 28, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(28, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
(5): ConvLayer(
(conv): Conv2d(48, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(6): AvgPool2d(kernel_size=2, stride=2, padding=0)
(7): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(80, 28, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(28, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(28, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(108, 46, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(46, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
(8): ConvLayer(
(conv): Conv2d(78, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(9): AvgPool2d(kernel_size=2, stride=2, padding=0)
(10): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(96, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(114, 30, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(30, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(30, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(144, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): ConvLayer(
(conv): Conv2d(52, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(5): ConvLayer(
(conv): Conv2d(70, 30, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(30, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(6): ConvLayer(
(conv): Conv2d(30, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(7): ConvLayer(
(conv): Conv2d(196, 88, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
(11): ConvLayer(
(conv): Conv2d(160, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(12): AvgPool2d(kernel_size=2, stride=2, padding=0)
(13): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(160, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(184, 40, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(40, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(224, 70, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(70, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): ConvLayer(
(conv): Conv2d(70, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(5): ConvLayer(
(conv): Conv2d(94, 40, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(6): ConvLayer(
(conv): Conv2d(40, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(7): ConvLayer(
(conv): Conv2d(294, 118, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(118, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
(14): ConvLayer(
(conv): Conv2d(214, 224, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(15): AvgPool2d(kernel_size=2, stride=2, padding=0)
(16): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(224, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(256, 54, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(54, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(54, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(310, 92, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(92, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): ConvLayer(
(conv): Conv2d(92, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(5): ConvLayer(
(conv): Conv2d(124, 54, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(54, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(6): ConvLayer(
(conv): Conv2d(54, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(7): ConvLayer(
(conv): Conv2d(402, 158, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(158, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
(17): ConvLayer(
(conv): Conv2d(286, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
(transUpBlocks): ModuleList(
(0): TransitionUp()
(1): TransitionUp()
(2): TransitionUp()
(3): TransitionUp()
)
(denseBlocksUp): ModuleList(
(0): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(267, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(291, 40, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(40, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(331, 70, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(70, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): ConvLayer(
(conv): Conv2d(70, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(5): ConvLayer(
(conv): Conv2d(94, 40, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(6): ConvLayer(
(conv): Conv2d(40, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(7): ConvLayer(
(conv): Conv2d(401, 118, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(118, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
(1): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(187, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(205, 30, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(30, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(30, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(235, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): ConvLayer(
(conv): Conv2d(52, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(5): ConvLayer(
(conv): Conv2d(70, 30, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(30, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(6): ConvLayer(
(conv): Conv2d(30, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(7): ConvLayer(
(conv): Conv2d(287, 88, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
(2): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(119, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(135, 28, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(28, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(28, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(163, 46, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(46, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
(3): HarDBlock(
(layers): ModuleList(
(0): ConvLayer(
(conv): Conv2d(63, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(10, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(73, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(18, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(10, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(91, 28, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm): BatchNorm2d(28, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
)
(conv1x1_up): ModuleList(
(0): ConvLayer(
(conv): Conv2d(534, 267, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(267, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): ConvLayer(
(conv): Conv2d(374, 187, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(187, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): ConvLayer(
(conv): Conv2d(238, 119, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(119, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): ConvLayer(
(conv): Conv2d(126, 63, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm): BatchNorm2d(63, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
(finalConv): Conv2d(48, 7, kernel_size=(1, 1), stride=(1, 1))
)
Is there any op in here that the RK3399Pro does not support?
Author: jefferyzhang
Time: 2020-5-7 16:20
Posted like this, no one can tell you which ops are supported and which are not.
Please post the verbose output from the conversion step.
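For reference, the verbose conversion log comes from the rknn-toolkit side, not from PyTorch itself. A minimal sketch of the full path, with a tiny stand-in model (the file name, input size, and the commented rknn-toolkit calls are illustrative assumptions, not taken from this thread):

```python
import os
import tempfile
import torch
import torch.nn as nn

# Tiny stand-in model; the real hardnet would be traced the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(16),
    nn.ReLU(inplace=True),
)
model.eval()

# Trace to TorchScript, as in the original post.
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
path = os.path.join(tempfile.gettempdir(), "hardnet.pt")
traced.save(path)

# rknn-toolkit side (runs on the conversion host; shown as comments because
# rknn-toolkit is a separate package -- this is where the per-op log appears):
#
# from rknn.api import RKNN
# rknn = RKNN(verbose=True)   # verbose=True prints each op as it is converted,
#                             # which is the log being asked for here
# rknn.config(...)
# rknn.load_pytorch(model=path, input_size_list=[[3, 224, 224]])
# rknn.build(do_quantization=False)
# rknn.export_rknn('hardnet.rknn')
```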
Author: 18022443868
Time: 2020-5-7 16:38
The model uses depthwise separable convolution; is this op currently supported?
Author: 18022443868
Time: 2020-5-7 16:48
TracedModule[hardnet](
(base): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(4): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
(5): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(6): TracedModule[AvgPool2d]()
(7): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
(8): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(9): TracedModule[AvgPool2d]()
(10): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(4): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(5): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(6): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(7): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
(11): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(12): TracedModule[AvgPool2d]()
(13): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(4): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(5): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(6): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(7): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
(14): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(15): TracedModule[AvgPool2d]()
(16): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(4): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(5): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(6): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(7): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
(17): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
(transUpBlocks): TracedModule[ModuleList](
(0): TracedModule[TransitionUp]()
(1): TracedModule[TransitionUp]()
(2): TracedModule[TransitionUp]()
(3): TracedModule[TransitionUp]()
)
(denseBlocksUp): TracedModule[ModuleList](
(0): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(4): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(5): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(6): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(7): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
(1): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(4): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(5): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(6): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(7): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
(2): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
(3): TracedModule[HarDBlock](
(layers): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
)
)
(conv1x1_up): TracedModule[ModuleList](
(0): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(1): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(2): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
(3): TracedModule[Sequential](
(0): TracedModule[Conv2d]()
(1): TracedModule[ReLU]()
)
)
(finalConv): TracedModule[Conv2d]()
)
Is this the information you meant?
Author: jefferyzhang
Time: 2020-5-7 17:01
Quoting 18022443868 (2020-5-7 16:38):
The model uses depthwise separable convolution; is this op currently supported?
1. Depthwise (dw) convolution is supported, but the model needs changes; see our troubleshooting document for details.
2. Are you sure you want dw convolution? It exists to speed up computation on a CPU, at the cost of some accuracy. Our NPU runs ordinary convolution far faster than dw convolution, so there is no point sacrificing both accuracy and speed.
3. All the questions you have asked are answered in the troubleshooting document.
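The trade-off described above can be seen in a small sketch (channel counts are illustrative, not taken from hardnet): depthwise separable convolution replaces one dense 3x3 kernel with a per-channel 3x3 (`groups=in_ch`) followed by a 1x1 pointwise mix, cutting parameters and FLOPs at some cost in accuracy.

```python
import torch
import torch.nn as nn

in_ch, out_ch = 32, 64  # example channel counts

# Ordinary convolution: each output channel sees all input channels.
ordinary = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)

# Depthwise separable: per-channel 3x3 (groups=in_ch), then 1x1 pointwise.
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch, bias=False),
    nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
)

x = torch.randn(1, in_ch, 56, 56)
assert ordinary(x).shape == depthwise_separable(x).shape

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# Ordinary: 3*3*32*64 = 18432 weights.
# Separable: 3*3*32 + 32*64 = 2336 weights, ~8x fewer.
```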
Author: 18022443868
Time: 2020-5-7 17:38
Where can I find the document you mentioned?
Author: 18022443868
Time: 2020-5-7 17:53
Do you mean the documents in the RKNN\rknn-toolkit\doc folder?
(It seems images cannot be uploaded in replies.)
Author: 18022443868
Time: 2020-5-7 19:03
Is the inverted residual structure currently supported?
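For context: an inverted residual decomposes into ops already discussed in this thread, namely 1x1 convolutions, a depthwise 3x3, BatchNorm, an activation, and an elementwise add, so its support reduces to the support of those ops. A minimal MobileNetV2-style sketch (the thread does not show the poster's actual block, so this is an assumed structure):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual:
    expand 1x1 -> depthwise 3x3 -> project 1x1, with a skip connection.
    Illustrative only; not the poster's model."""
    def __init__(self, ch, expand=6):
        super().__init__()
        hidden = ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False),      # expand
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),      # depthwise
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False),      # project (linear)
            nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        # Residual add only when stride is 1 and channels match.
        return x + self.block(x)

x = torch.randn(1, 24, 28, 28)
y = InvertedResidual(24)(x)
```

Note the two extra ops beyond plain convolution: ReLU6 and the elementwise add; a converter has to support both for this block to work.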
Author: 18022443868
Time: 2020-5-7 19:27
Found the file you mentioned: Rockchip_Trouble_Shooting_RKNN_Toolkit_V1.3_CN.PDF.
Welcome to Toybrick (https://t.rock-chips.com/)
Powered by Discuz! X3.3