Trainable params: 361,866
Non-trainable params: 0
_________________________________________________________________
In addition, an RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or to initialize another RNN. This setting is commonly used in the encoder-decoder sequence-to-sequence model, where the encoder's final state is used as the initial state of the decoder.
To configure an RNN layer to return its internal state, set the return_state parameter to True when creating the layer. Note that LSTM has 2 state tensors, but GRU only has one.
To configure the initial state of the layer, just call the layer with the additional keyword argument initial_state. Note that the shape of the state needs to match the unit size of the layer, as in the example below.
encoder_vocab = 1000
decoder_vocab = 2000

encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(
    encoder_input
)

# Return states in addition to output
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(
    encoder_embedded
)
encoder_state = [state_h, state_c]

decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(
    decoder_input
)

# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(64, name="decoder")(
    decoder_embedded, initial_state=encoder_state
)
output = layers.Dense(10)(decoder_output)

model = keras.Model([encoder_input, decoder_input], output)
model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(None, None)]       0
__________________________________________________________________________________________________
input_2 (InputLayer)            [(None, None)]       0
__________________________________________________________________________________________________
embedding_2 (Embedding)         (None, None, 64)     64000       input_1[0][0]
__________________________________________________________________________________________________
embedding_3 (Embedding)         (None, None, 64)     128000      input_2[0][0]
__________________________________________________________________________________________________
encoder (LSTM)                  [(None, 64), (None,  33024       embedding_2[0][0]
__________________________________________________________________________________________________
decoder (LSTM)                  (None, 64)           33024       embedding_3[0][0]
                                                                 encoder[0][1]
                                                                 encoder[0][2]
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 10)           650         decoder[0][0]
==================================================================================================
Total params: 258,698
Trainable params: 258,698
Non-trainable params: 0
__________________________________________________________________________________________________
RNN layers and RNN cells |
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, an RNN cell only processes a single timestep.
The cell is the inside of the for loop of an RNN layer. Wrapping a cell inside a keras.layers.RNN layer gives you a layer capable of processing batches of sequences, e.g. RNN(LSTMCell(10)).
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in an RNN layer. However, using the built-in GRU and LSTM layers enables the use of CuDNN, and you may see better performance.
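For example, a quick sketch (not from this guide; the random input and import lines are illustrative) showing that a wrapped cell produces output of the same shape as the built-in layer:
import numpy as np
from tensorflow.keras import layers

# Illustrative input: batch of 32, 10 timesteps, 8 features.
inputs = np.random.random((32, 10, 8)).astype(np.float32)

cell_based = layers.RNN(layers.LSTMCell(64))  # cell wrapped in the generic RNN layer
built_in = layers.LSTM(64)                    # built-in layer (may use CuDNN)

print(cell_based(inputs).shape)  # (32, 64)
print(built_in(inputs).shape)    # (32, 64)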
There are three built-in RNN cells, each of them corresponding to the matching RNN layer. |
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer. |
keras.layers.GRUCell corresponds to the GRU layer. |
keras.layers.LSTMCell corresponds to the LSTM layer. |
The cell abstraction, together with the generic keras.layers.RNN class, makes it very easy to implement custom RNN architectures for your research.
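As an illustration, here is a hypothetical minimal custom cell (a plain tanh recurrence written for this sketch, not taken from the guide) wrapped in keras.layers.RNN:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

class MinimalRNNCell(layers.Layer):
    """Hypothetical minimal cell: output_t = tanh(x_t @ Wx + h_{t-1} @ Wh)."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units  # read by keras.layers.RNN to allocate the state

    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            name="kernel",
        )
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer="orthogonal",
            name="recurrent_kernel",
        )

    def call(self, inputs, states):
        prev_output = states[0]
        output = tf.tanh(
            tf.matmul(inputs, self.kernel)
            + tf.matmul(prev_output, self.recurrent_kernel)
        )
        return output, [output]  # (output for this timestep, new states)

# Wrapping the single-timestep cell in keras.layers.RNN yields a full sequence layer.
layer = layers.RNN(MinimalRNNCell(32))
y = layer(np.random.random((16, 10, 8)).astype(np.float32))  # shape: (16, 32)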
Cross-batch statefulness |
When processing very long sequences (possibly infinite), you may want to use the pattern of cross-batch statefulness. |
Normally, the internal state of an RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample.
If you have very long sequences, though, it is useful to break them into shorter sequences and to feed these shorter sequences sequentially into an RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it's only seeing one sub-sequence at a time.
You can do this by setting stateful=True in the constructor. |
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g. |
s1 = [t0, t1, ... t100] |
s2 = [t101, ... t201] |
... |
s16 = [t1501, ... t1547] |
Then you would process it via: |
lstm_layer = layers.LSTM(64, stateful=True) |
for s in sub_sequences:
    output = lstm_layer(s)
When you want to clear the state, you can use layer.reset_states(). |
Note: In this setup, sample i in a given batch is assumed to be the continuation of sample i in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100], the next batch should contain [sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200]. |
Here is a complete example: |
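(The sketch below is representative rather than canonical: random data stands in for real sub-sequences, and the shapes are illustrative.)
import numpy as np
from tensorflow.keras import layers

# Random stand-in data: 20 samples, 30 timesteps, 50 features.
paragraphs = np.random.random((20, 30, 50)).astype(np.float32)

lstm_layer = layers.LSTM(64, stateful=True)

# Feed one long sequence as three consecutive chunks; because the layer is
# stateful, the state computed on each chunk carries over to the next call.
output = lstm_layer(paragraphs[:, :10, :])
output = lstm_layer(paragraphs[:, 10:20, :])
output = lstm_layer(paragraphs[:, 20:, :])

# reset_states() clears the cached state (back to zeros here, since no
# initial_state was provided).
lstm_layer.reset_states()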