Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation

dwSun 2018-02-06 17:25

In this paper we describe a new mobile architecture, MobileNetV2, that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers, opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet classification, COCO object detection, and VOC image segmentation. We evaluate the trade-offs between accuracy and number of operations measured by multiply-adds (MAdds), as well as the number of parameters.
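The inverted residual block described above can be traced shape-by-shape. Here is a minimal pure-Python sketch; the expansion factor of 6 is the paper's default, while the 56x56 spatial size is only an illustrative assumption:

```python
def inverted_residual_shapes(in_ch, out_ch, stride, expand=6, hw=56):
    """Trace tensor shapes through one MobileNetV2 inverted residual block.

    The block expands a thin input with a 1x1 conv + ReLU6, filters it
    with a 3x3 depthwise conv + ReLU6, then projects back down with a
    1x1 conv that has NO non-linearity (the "linear bottleneck").
    Returns the per-layer (name, channels, spatial) shapes and whether
    the residual shortcut applies.
    """
    hidden = in_ch * expand
    hw_out = hw // stride
    shapes = [
        ("input", in_ch, hw),
        ("1x1 expand + ReLU6", hidden, hw),
        (f"3x3 depthwise s={stride} + ReLU6", hidden, hw_out),
        ("1x1 project (linear)", out_ch, hw_out),
    ]
    # Element-wise add is only well-defined when the input and output
    # tensors have identical shapes: stride 1 and matching channels.
    use_residual = stride == 1 and in_ch == out_ch
    return shapes, use_residual
```

For example, a 24-channel block with stride 1 expands to a 144-channel hidden layer and keeps the shortcut; changing the output to 32 channels, or using stride 2, disables it.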


Original authors: Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen



dwSun 2018-02-07 11:58

The network structure looks like it was influenced by DenseNet, and the analysis of how much information ReLU destroys at different dimensionalities is a genuinely interesting piece of research. That said, the paper has a few errors and leaves some details unexplained. The text says the network is 19 layers, but the table it gives actually lists only 17. Also, in the stride-1 blocks, many blocks cannot directly do the element-wise add; how this is handled is never stated. Among the implementations I have seen, some simply drop the add, while others rescale the channels after the bottleneck. Overall, Google's papers still pack plenty of substance.
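The mismatch complained about above can be made concrete. A minimal numpy sketch of the two workarounds seen in third-party implementations; the 1x1 shortcut projection modeled here is an assumption drawn from those implementations, not something the paper specifies:

```python
import numpy as np

def residual_add(x, block_out, proj_weight=None):
    """Combine a block's input x (C_in, H, W) with its output (C_out, H, W).

    When the shapes match (stride 1, same channel count), the paper's
    plain element-wise add applies. When channels differ, implementations
    diverge: some drop the shortcut entirely, others insert a 1x1
    projection on the shortcut path (modeled here as a channel-mixing
    matmul via einsum). Both branches below are assumed workarounds.
    """
    if x.shape == block_out.shape:
        return x + block_out                        # the paper's case
    if proj_weight is not None:                     # (C_out, C_in) 1x1 conv
        projected = np.einsum('oc,chw->ohw', proj_weight, x)
        return projected + block_out
    return block_out                                # shortcut skipped
```

Usage with a hypothetical 24-to-32-channel stride-1 block:

```python
x = np.ones((24, 7, 7))
y = np.ones((32, 7, 7))
w = np.ones((32, 24)) / 24.0   # hypothetical 1x1 projection weights
out = residual_add(x, y, w)    # shape (32, 7, 7)
```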