arXiv:1603.07285

A guide to convolution arithmetic for deep learning

Published on Mar 23, 2016
Authors: Vincent Dumoulin, Francesco Visin

AI-generated summary

This guide helps deep learning practitioners understand and manipulate convolutional neural network architectures by detailing the relationships between layer properties such as input and output shapes, kernel shape, zero padding, and strides, for both convolutional and transposed convolutional layers.

Abstract

We introduce a guide to help deep learning practitioners understand and manipulate convolutional neural network architectures. The guide clarifies the relationship between various properties (input shape, kernel shape, zero padding, strides and output shape) of convolutional, pooling and transposed convolutional layers, as well as the relationship between convolutional and transposed convolutional layers. Relationships are derived for various cases, and are illustrated in order to make them intuitive.
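To make these relationships concrete, below is a minimal sketch of two of the one-dimensional size formulas the guide derives: the output size of a convolution, o = floor((i + 2p - k) / s) + 1, and the output size of the matching transposed convolution in the case where i + 2p - k is a multiple of s. The function names are illustrative, not taken from the paper.

```python
# A minimal sketch for a single spatial axis; the function names are
# illustrative and not part of the paper.
from math import floor

def conv_output_size(i, k, p, s):
    """Output size of a convolution: i = input size, k = kernel size,
    p = zero padding added on each side, s = stride."""
    return floor((i + 2 * p - k) / s) + 1

def transposed_conv_output_size(i, k, p, s):
    """Output size of the corresponding transposed convolution, in the
    common case where the forward pass's (i + 2p - k) is a multiple of s,
    so no extra output padding is needed."""
    return s * (i - 1) + k - 2 * p

# Example: a 5-wide input, 3-wide kernel, padding 1, stride 2.
o = conv_output_size(5, 3, 1, 2)               # -> 3
print(o)
print(transposed_conv_output_size(o, 3, 1, 2)) # -> 5, recovering the input size
```

In the example, the transposed convolution recovers the original 5-wide input shape from the 3-wide convolution output, which is the kind of inverse-shape relationship between convolution and transposed convolution that the guide illustrates.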
