import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the CuDNN implementation of the layers).
raw_inputs = [[711, 632, 71], [73, 8, 3215, 55, 927], [83, 91, 1, 645, 1253, 927]]
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(
    raw_inputs, padding="post"
)
print(padded_inputs)
[[ 711  632   71    0    0    0]
 [  73    8 3215   55  927    0]
 [  83   91    1  645 1253  927]]
Masking
Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models:
Add a keras.layers.Masking layer.
Configure a keras.layers.Embedding layer with mask_zero=True.
Pass a mask argument manually when calling layers that support this argument (e.g. RNN layers).
Mask-generating layers: Embedding and Masking
Under the hood, these layers will create a mask tensor (a 2D tensor with shape (batch, sequence_length)) and attach it to the tensor output returned by the Masking or Embedding layer.
embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)
masking_layer = layers.Masking()
# Simulate the embedding lookup by expanding the 2D input to 3D,
# with embedding dimension of 10.
unmasked_embedding = tf.cast(
    tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)
tf.Tensor(
[[ True  True  True False False False]
 [ True  True  True  True  True False]
 [ True  True  True  True  True  True]], shape=(3, 6), dtype=bool)
tf.Tensor(
[[ True  True  True False False False]
 [ True  True  True  True  True False]
 [ True  True  True  True  True  True]], shape=(3, 6), dtype=bool)
As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing.
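For reference, because mask_zero=True marks token id 0 as padding, the same mask can be reproduced by hand with a plain comparison (a minimal sketch, reusing the padded_inputs array from above):
manual_mask = tf.not_equal(padded_inputs, 0)
print(manual_mask)  # Same boolean values as the _keras_mask tensors printed above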
Mask propagation in the Functional API and Sequential API
When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using it (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it.
For instance, in the following Sequential model, the LSTM layer will automatically receive a mask, which means it will ignore padded values:
model = keras.Sequential(
    [layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),]
)
This is also the case for the following Functional API model:
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
outputs = layers.LSTM(32)(x)
model = keras.Model(inputs, outputs)
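As a quick sanity check (a sketch, reusing the padded_inputs batch from above), calling the model on the padded batch yields one 32-dimensional LSTM output per sequence, with the masked timesteps skipped:
y = model(padded_inputs)
print(y.shape)  # (3, 32): 3 sequences in the batch, 32 LSTM units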
Passing mask tensors directly to layers
Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method.
Meanwhile, layers that produce a mask (e.g. Embedding) expose a compute_mask(input, previous_mask) method which you can call.
Thus, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer, like this:
class MyLayer(layers.Layer):
    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)
        self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
        self.lstm = layers.LSTM(32)

    def call(self, inputs):
        x = self.embedding(inputs)
        # Note that you could also prepare a `mask` tensor manually.
        # It only needs to be a boolean tensor
        # with the right shape, i.e. (batch_size, timesteps).
        mask = self.embedding.compute_mask(inputs)
        output = self.lstm(x, mask=mask)  # The layer will ignore the masked values
        return output

layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype("int32")
layer(x)
<tf.Tensor: shape=(32, 32), dtype=float32, numpy=
array([[ 9.3598114e-03, -5.4868571e-03, -1.2649748e-02, ...,
         1.3104092e-03, -1.8691338e-03,  1.6320259e-03],
       [-6.0183648e-03, -4.9164523e-03,  3.0082103e-03, ...,
         1.7394881e-03,  9.1036235e-04, -1.2966867e-02],
       [ 6.0863183e-03,  1.3509918e-03, -7.1913302e-03, ...,
         3.9419280e-03,  2.9930705e-03,  3.4562423e-04],
       ...,
       [-5.7978416e-04, -1.8325391e-03, -2.0467002e-04, ...,
        -3.9534271e-03, -2.2688047e-04,  1.2577593e-03],
       [ 2.4689233e-03, -3.6403039e-04,  7.7487719e-05, ...,
         1.0208538e-03,  2.3937733e-03, -4.4873711e-03],
       [ 2.6551904e-03, -1.8738948e-03, -1.9827935e-04, ...,
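As the comment in call() notes, the mask passed to the LSTM does not have to come from compute_mask(); any boolean tensor of shape (batch_size, timesteps) works. A minimal sketch of a hand-built mask (the shapes and sizes here are illustrative, matching the layers above):
embedding = layers.Embedding(input_dim=5000, output_dim=16)
lstm = layers.LSTM(32)
inputs = np.random.randint(0, 5000, size=(32, 10))
# Hand-built mask: treat the last 3 timesteps of every sample as padding.
manual_mask = np.ones((32, 10), dtype=bool)
manual_mask[:, -3:] = False
output = lstm(embedding(inputs), mask=tf.convert_to_tensor(manual_mask))  # shape (32, 32)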