For instance, the `BatchNormalization` layer has 2 trainable weights and 2 non-trainable weights:

```python
import numpy as np
import keras

layer = keras.layers.BatchNormalization()
layer.build((None, 4))  # Create the weights

print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
```

```
weights: 4
trainable_weights: 2
non_trainable_weights: 2
```
Layers & models also feature a boolean attribute `trainable`. Its value can be changed. Setting `layer.trainable` to `False` moves all the layer's weights from trainable to non-trainable. This is called "freezing" the layer: the state of a frozen layer won't be updated during training (either when training with `fit()` or when training with any custom loop that relies on `trainable_weights` to apply gradient updates).
**Example: setting `trainable` to `False`**

```python
layer = keras.layers.Dense(3)
layer.build((None, 4))  # Create the weights
layer.trainable = False  # Freeze the layer

print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
```

```
weights: 2
trainable_weights: 0
non_trainable_weights: 2
```
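The same holds in a custom training loop: because gradient updates are applied only to `trainable_weights`, frozen layers are left untouched. Here is a minimal sketch of such a loop, assuming the TensorFlow backend (the `train_step` helper and its arguments are illustrative, not part of the Keras API):

```python
import tensorflow as tf

optimizer = keras.optimizers.Adam()

def train_step(model, x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(
            keras.losses.mean_squared_error(y, model(x, training=True))
        )
    # Gradients are taken (and applied) only with respect to
    # `trainable_weights`, so the weights of frozen layers never change.
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return loss
```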
When a trainable weight becomes non-trainable, its value is no longer updated during training.

```python
# Make a model with 2 layers
layer1 = keras.layers.Dense(3, activation="relu")
layer2 = keras.layers.Dense(3, activation="sigmoid")
model = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2])

# Freeze the first layer
layer1.trainable = False

# Keep a copy of the weights of layer1 for later reference
initial_layer1_weights_values = layer1.get_weights()

# Train the model
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))

# Check that the weights of layer1 have not changed during training
final_layer1_weights_values = layer1.get_weights()
np.testing.assert_allclose(
    initial_layer1_weights_values[0], final_layer1_weights_values[0]
)
np.testing.assert_allclose(
    initial_layer1_weights_values[1], final_layer1_weights_values[1]
)
```

```
1/1 [==============================] - 0s 1ms/step - loss: 0.0846
```
Do not confuse the `layer.trainable` attribute with the argument `training` in `layer.__call__()` (which controls whether the layer should run its forward pass in inference mode or training mode). For more information, see the Keras FAQ.
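To make the distinction concrete, here is a minimal sketch using a `Dropout` layer. `Dropout` has no weights, so `trainable` is irrelevant to it; the `training` argument alone decides whether units are dropped:

```python
drop = keras.layers.Dropout(0.5)
x = np.ones((1, 4))

print(drop(x, training=False))  # Inference mode: input passes through unchanged
print(drop(x, training=True))   # Training mode: roughly half the units are zeroed
```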
## Recursive setting of the `trainable` attribute
If you set `trainable = False` on a model or on any layer that has sublayers, all child layers become non-trainable as well.
**Example:**

```python
inner_model = keras.Sequential(
    [
        keras.Input(shape=(3,)),
        keras.layers.Dense(3, activation="relu"),
        keras.layers.Dense(3, activation="relu"),
    ]
)

model = keras.Sequential(
    [
        keras.Input(shape=(3,)),
        inner_model,
        keras.layers.Dense(3, activation="sigmoid"),
    ]
)

model.trainable = False  # Freeze the outer model

assert inner_model.trainable == False  # All layers in `model` are now frozen
assert inner_model.layers[0].trainable == False  # `trainable` is propagated recursively
```
## The typical transfer-learning workflow
This leads us to how a typical transfer learning workflow can be implemented in Keras:

1. Instantiate a base model and load pre-trained weights into it.
2. Freeze all layers in the base model by setting `trainable = False`.
3. Create a new model on top of the output of one (or several) layers from the base model.
4. Train your new model on your new dataset.
Note that an alternative, more lightweight workflow could also be:

1. Instantiate a base model and load pre-trained weights into it.
2. Run your new dataset through it and record the output of one (or several) layers from the base model. This is called feature extraction.
3. Use that output as input data for a new, smaller model.
A key advantage of that second workflow is that you only run the base model once on your data, rather than once per epoch of training. So it's a lot faster & cheaper.
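A minimal sketch of that feature-extraction workflow (`base_model`, `dataset`, and `labels` are placeholder names for a frozen pre-trained model and your data, assumed to be defined elsewhere):

```python
# Run the base model exactly once over the data and cache the features.
features = base_model.predict(dataset)

# Train a new, smaller model on the precomputed features.
classifier = keras.Sequential(
    [
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(1),
    ]
)
classifier.compile(
    optimizer="adam",
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
)
classifier.fit(features, labels, epochs=10)
```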
An issue with that second workflow, though, is that it doesn't allow you to dynamically modify the input data of your new model during training, which is required when doing data augmentation, for instance. Transfer learning is typically used for tasks where your new dataset has too little data to train a full-scale model from scratch, and in such scenarios data augmentation is very important. So in what follows, we will focus on the first workflow.
Here's what the first workflow looks like in Keras:

First, instantiate a base model with pre-trained weights.
```python
base_model = keras.applications.Xception(
    weights="imagenet",  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False,  # Do not include the ImageNet classifier at the top.
)
```
Then, freeze the base model.

```python
base_model.trainable = False
```
Create a new model on top.
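One way to build that new model is with the functional API; a sketch follows, where the `GlobalAveragePooling2D` head and single-unit `Dense` classifier are illustrative choices for a binary-classification setup, not the only option.

```python
inputs = keras.Input(shape=(150, 150, 3))

# Run the frozen base model in inference mode by passing `training=False`,
# so that layers like BatchNormalization keep their inference behavior.
x = base_model(inputs, training=False)

# Convert the base model's feature maps to flat vectors.
x = keras.layers.GlobalAveragePooling2D()(x)

# A Dense classifier with a single unit (binary classification).
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
```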