For more information, make sure to read the Functional API guide.
Working with RNNs
Authors: Scott Zhu, Francois Chollet
Date created: 2019/07/08
Last modified: 2020/04/14
Description: Complete guide to using & customizing RNN layers.
View in Colab • GitHub source
Introduction
Recurrent neural networks (RNNs) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language.
Schematically, an RNN layer uses a for loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.
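To make the "for loop with an internal state" picture concrete, here is a minimal NumPy sketch of that loop; the tanh update rule and the weight names (W, U, b) are illustrative only, not the actual implementation of any built-in layer:
import numpy as np

# Illustrative only: a bare-bones recurrent loop with made-up weights.
timesteps, input_dim, units = 10, 8, 4
x = np.random.random((timesteps, input_dim))   # one sequence of inputs
W = np.random.random((input_dim, units))       # input-to-state weights (hypothetical)
U = np.random.random((units, units))           # state-to-state weights (hypothetical)
b = np.zeros((units,))

state = np.zeros((units,))                     # internal state, updated at every timestep
for t in range(timesteps):
    state = np.tanh(x[t] @ W + state @ U + b)  # new state depends on the input and the previous state
# `state` now encodes information about the whole sequence.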
The Keras RNN API is designed with a focus on:
Ease of use: the built-in keras.layers.RNN, keras.layers.LSTM, keras.layers.GRU layers enable you to quickly build recurrent models without having to make difficult configuration choices.
Ease of customization: You can also define your own RNN cell layer (the inner part of the for loop) with custom behavior, and use it with the generic keras.layers.RNN layer (the for loop itself). This allows you to quickly prototype different research ideas in a flexible way with minimal code.
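As a taste of that customization path, here is a hedged sketch of a minimal custom cell wrapped in the generic keras.layers.RNN layer; the cell name MinimalRNNCell and its simple tanh update are illustrative assumptions, not part of the built-in API:
import tensorflow as tf
from tensorflow import keras


class MinimalRNNCell(keras.layers.Layer):
    # Hypothetical minimal cell: new_state = tanh(x @ kernel + prev_state @ recurrent_kernel).
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units  # tells the RNN wrapper the shape of the state

    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units), initializer="uniform", name="kernel"
        )
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units), initializer="uniform", name="recurrent_kernel"
        )

    def call(self, inputs, states):
        prev_state = states[0]
        h = tf.matmul(inputs, self.kernel)
        output = tf.tanh(h + tf.matmul(prev_state, self.recurrent_kernel))
        return output, [output]  # (output for this timestep, new state)


# keras.layers.RNN provides the loop; the cell defines what happens at each step.
layer = keras.layers.RNN(MinimalRNNCell(32))
y = layer(tf.zeros((2, 10, 5)))  # (batch, timesteps, features) -> (batch, units)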
Setup
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Built-in RNN layers: a simple example
There are three built-in RNN layers in Keras:
keras.layers.SimpleRNN, a fully-connected RNN where the output from the previous timestep is fed to the next timestep.
keras.layers.GRU, first proposed in Cho et al., 2014.
keras.layers.LSTM, first proposed in Hochreiter & Schmidhuber, 1997.
In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU.
Here is a simple example of a Sequential model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using an LSTM layer.
model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add an LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model.add(layers.Dense(10))
model.summary()
Model: "sequential" |
_________________________________________________________________ |
Layer (type) Output Shape Param # |
================================================================= |
embedding (Embedding) (None, None, 64) 64000 |
_________________________________________________________________ |
lstm (LSTM) (None, 128) 98816 |
_________________________________________________________________ |
dense (Dense) (None, 10) 1290 |
================================================================= |
Total params: 164,106 |
Trainable params: 164,106 |
Non-trainable params: 0 |
_________________________________________________________________ |
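As a sanity check, these parameter counts can be reproduced by hand; the LSTM has four gates, each with an input kernel, a recurrent kernel, and a bias (assuming the default use_bias=True):
embedding_params = 1000 * 64                # vocab_size * output_dim                        = 64,000
lstm_params = 4 * 128 * (64 + 128 + 1)      # 4 gates * units * (input_dim + units + bias)   = 98,816
dense_params = 128 * 10 + 10                # input units * output units + bias              = 1,290
print(embedding_params + lstm_params + dense_params)  # 164106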
Built-in RNNs support a number of useful features:
Recurrent dropout, via the dropout and recurrent_dropout arguments
Ability to process an input sequence in reverse, via the go_backwards argument
Loop unrolling (which can lead to a large speedup when processing short sequences on CPU), via the unroll argument
...and more.
For more information, see the RNN API documentation.
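For example, all of these features are exposed as constructor arguments on the built-in layers (the values below are arbitrary and only meant to show where each knob lives):
layer = layers.LSTM(
    64,
    dropout=0.2,            # dropout applied to the inputs at each timestep
    recurrent_dropout=0.2,  # dropout applied to the recurrent state
    go_backwards=True,      # process the input sequence in reverse
    unroll=True,            # unroll the loop (can speed up short sequences on CPU)
)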
Outputs and states
By default, the output of an RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is (batch_size, units) where units corresponds to the units argument passed to the layer's constructor.
An RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample), if you set return_sequences=True. The shape of this output is (batch_size, timesteps, units).
model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
Model: "sequential_1" |
_________________________________________________________________ |
Layer (type) Output Shape Param # |
================================================================= |
embedding_1 (Embedding) (None, None, 64) 64000 |
_________________________________________________________________ |
gru (GRU) (None, None, 256) 247296 |
_________________________________________________________________ |
simple_rnn (SimpleRNN) (None, 128) 49280 |
_________________________________________________________________ |
dense_1 (Dense) (None, 10) 1290 |
================================================================= |
Total params: 361,866 |