ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
Abstract
This study introduces ShuffleNet V2, a new neural network architecture designed using direct performance metrics such as speed on the target platform rather than computation complexity alone, achieving a state-of-the-art speed-accuracy tradeoff.
Currently, neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. This work therefore proposes evaluating the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture, called ShuffleNet V2, is presented. Comprehensive ablation experiments verify that our model is state-of-the-art in terms of the speed-accuracy tradeoff.
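For readers unfamiliar with the building block referenced by the name ShuffleNet, the following is a minimal PyTorch sketch (not the authors' released implementation) of a ShuffleNet V2-style stride-1 unit, showing the channel-split, depthwise-convolution, and channel-shuffle pattern; the layer widths and the 116-channel usage example are hypothetical choices for illustration, and the code assumes an even channel count.

```python
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so information mixes between branches."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class ShuffleV2Unit(nn.Module):
    """Stride-1 unit: split channels, transform one half, concatenate, then shuffle."""

    def __init__(self, channels: int):
        super().__init__()
        assert channels % 2 == 0, "channel split assumes an even channel count"
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=1, bias=False),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
            # depthwise 3x3 convolution (groups == channels)
            nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, kernel_size=1, bias=False),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel split: one half acts as an identity shortcut, the other is transformed.
        x1, x2 = x.chunk(2, dim=1)
        out = torch.cat((x1, self.branch(x2)), dim=1)
        return channel_shuffle(out, groups=2)


if __name__ == "__main__":
    unit = ShuffleV2Unit(channels=116)  # hypothetical width for illustration
    y = unit(torch.randn(1, 116, 28, 28))
    print(y.shape)  # torch.Size([1, 116, 28, 28])
```

The channel split keeps half of the feature map on an identity path, which keeps the two 1x1 convolutions non-grouped and the memory access pattern simple; the shuffle at the end then exchanges information between the two halves across successive units.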