arxiv:1807.11164

ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

Published on Jul 30, 2018
Authors: Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, Jian Sun

Abstract

This study introduces a new neural network architecture, ShuffleNet V2, whose design is guided by direct performance metrics such as speed on the target platform rather than only computation complexity (FLOPs), leading to a state-of-the-art speed-accuracy tradeoff.
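
To make the FLOPs-versus-speed distinction concrete, here is a small back-of-the-envelope sketch in Python (written for this page, not taken from the paper): two 1x1 convolutions with identical FLOPs can have very different memory access cost (MAC), one of the factors the paper identifies behind the gap between computation complexity and actual speed. The feature-map size and channel counts below are illustrative.

```python
# Illustrative sketch, not from the paper's code release: two 1x1 convolutions
# with the same FLOPs can differ in memory access cost (MAC), which is one
# reason FLOPs alone does not predict runtime speed.
def flops_and_mac_1x1(c_in, c_out, h=56, w=56):
    flops = h * w * c_in * c_out                  # multiply-adds of a 1x1 conv
    mac = h * w * (c_in + c_out) + c_in * c_out   # read input, write output, read weights
    return flops, mac

# Hypothetical sizes: equal FLOPs, but the balanced channel split has lower MAC.
print(flops_and_mac_1x1(128, 128))  # (51380224, 819200)
print(flops_and_mac_1x1(32, 512))   # (51380224, 1722368)
```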

AI-generated summary

Currently, neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is state-of-the-art in the speed-accuracy tradeoff.
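
As a rough illustration of the resulting building block, below is a minimal PyTorch-style sketch of a stride-1 ShuffleNet V2 unit: split the channels in half, transform one half with a 1x1 conv, a 3x3 depthwise conv, and another 1x1 conv, concatenate, then channel-shuffle. This is a simplified reading of the paper, not the authors' reference implementation; the class name, layer widths, and the 116-channel example input are hypothetical.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    # Interleave channels across groups: reshape -> transpose -> flatten.
    n, c, h, w = x.size()
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))

class ShuffleV2Unit(nn.Module):
    """Stride-1 unit: channel split, transform one half, concat, shuffle."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # depthwise
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        left, right = x.chunk(2, dim=1)            # channel split
        out = torch.cat([left, self.branch(right)], dim=1)
        return channel_shuffle(out, groups=2)      # mix information between halves

x = torch.randn(1, 116, 28, 28)      # hypothetical input
print(ShuffleV2Unit(116)(x).shape)   # torch.Size([1, 116, 28, 28])
```

The split-plus-shuffle design keeps the two halves exchanging information across units while avoiding grouped 1x1 convolutions and extra element-wise additions, in line with the paper's guidelines on memory access cost and element-wise overhead.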

Models citing this paper 2

Datasets citing this paper 0

Spaces citing this paper 2

Collections including this paper 0