We recommend creating such sublayers in the __init__() method (since the sublayers will typically have a build() method, they will be built when the outer layer gets built).
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.
class MLPBlock(keras.layers.Layer):
    def __init__(self):
        super(MLPBlock, self).__init__()
        self.linear_1 = Linear(32)
        self.linear_2 = Linear(32)
        self.linear_3 = Linear(1)

    def call(self, inputs):
        x = self.linear_1(inputs)
        x = tf.nn.relu(x)
        x = self.linear_2(x)
        x = tf.nn.relu(x)
        return self.linear_3(x)

mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64)))  # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))

weights: 6
trainable weights: 6
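As a quick check of the deferred building described above (a minimal sketch that is not part of the original example; fresh_mlp is a hypothetical name), a newly created MLPBlock has no weights until its first call, because each Linear sublayer only creates its variables in build():

fresh_mlp = MLPBlock()
assert len(fresh_mlp.weights) == 0  # Nothing has been built yet
_ = fresh_mlp(tf.ones(shape=(3, 64)))  # The first call triggers build()
assert len(fresh_mlp.weights) == 6  # A kernel and a bias for each of the 3 Linear sublayers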
The add_loss() method
When writing the call() method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. You can do this by calling self.add_loss(value):
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
    def __init__(self, rate=1e-2):
        super(ActivityRegularizationLayer, self).__init__()
        self.rate = rate

    def call(self, inputs):
        self.add_loss(self.rate * tf.reduce_sum(inputs))
        return inputs
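A brief usage sketch (the variable names reg_layer, x, and y are hypothetical, not from the original guide): the layer passes its inputs through unchanged, while recording a scalar loss equal to rate times the sum of the inputs. Retrieving that loss is covered next.

reg_layer = ActivityRegularizationLayer(rate=1e-2)
x = tf.ones((2, 4))
y = reg_layer(x)  # The inputs pass through unchanged
assert len(reg_layer.losses) == 1  # One loss tensor was recorded: 1e-2 * 8 = 0.08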
These losses (including those created by any inner layer) can be retrieved via layer.losses. This property is reset at the start of every __call__() to the top-level layer, so that layer.losses always contains the loss values created during the last forward pass.
class OuterLayer(keras.layers.Layer):
    def __init__(self):
        super(OuterLayer, self).__init__()
        self.activity_reg = ActivityRegularizationLayer(1e-2)

    def call(self, inputs):
        return self.activity_reg(inputs)

layer = OuterLayer()
assert len(layer.losses) == 0  # No losses yet since the layer has never been called

_ = layer(tf.zeros((1, 1)))
assert len(layer.losses) == 1  # We created one loss value

# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros((1, 1)))
assert len(layer.losses) == 1  # This is the loss created during the call above
In addition, the losses property also contains regularization losses created for the weights of any inner layer:
class OuterLayerWithKernelRegularizer(keras.layers.Layer):
    def __init__(self):
        super(OuterLayerWithKernelRegularizer, self).__init__()
        self.dense = keras.layers.Dense(
            32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)
        )

    def call(self, inputs):
        return self.dense(inputs)

layer = OuterLayerWithKernelRegularizer()
_ = layer(tf.zeros((1, 1)))

# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)

[<tf.Tensor: shape=(), dtype=float32, numpy=0.0018842274>]
These losses are meant to be taken into account when writing training loops, like this:
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Iterate over the batches of a dataset.
for x_batch_train, y_batch_train in train_dataset:
    with tf.GradientTape() as tape:
        logits = model(x_batch_train)  # Logits for this minibatch
        # Loss value for this minibatch
        loss_value = loss_fn(y_batch_train, logits)
        # Add extra losses created during this forward pass:
        loss_value += sum(model.losses)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
For a detailed guide about writing training loops, see the guide to writing a training loop from scratch.
These losses also work seamlessly with fit() (they get automatically summed and added to the main loss, if any):
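For instance, a model that wraps the ActivityRegularizationLayer defined above can be trained with or without an explicit loss passed to compile() (a minimal sketch using placeholder random data, assuming the keras import used earlier in this guide):

import numpy as np

inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)

# If a loss is passed to `compile()`, the losses created via `add_loss()`
# get added to the main loss during training.
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))

# It is also possible to pass no loss to `compile()`, since the model
# already has a loss to minimize via the `add_loss()` call made
# during the forward pass.
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))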