inputs = keras.Input(shape=(150, 150, 3))
# We make sure that the base_model is running in inference mode here,
# by passing `training=False`. This is important for fine-tuning, as you will
# learn in a few paragraphs.
x = base_model(inputs, training=False)
# Convert features of shape `base_model.output_shape[1:]` to vectors
x = keras.layers.GlobalAveragePooling2D()(x)
# A Dense classifier with a single unit (binary classification)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
Train the model on new data.
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])
model.fit(new_dataset, epochs=20, callbacks=..., validation_data=...)
Fine-tuning
Once your model has converged on the new data, you can try to unfreeze all or part of the base model and retrain the whole model end-to-end with a very low learning rate.
This is an optional last step that can potentially give you incremental improvements. It could also potentially lead to quick overfitting -- keep that in mind.
It is critical to only do this step after the model with frozen layers has been trained to convergence. If you mix randomly-initialized trainable layers with trainable layers that hold pre-trained features, the randomly-initialized layers will cause very large gradient updates during training, which will destroy your pre-trained features.
It's also critical to use a very low learning rate at this stage, because you are training a much larger model than in the first round of training, on a dataset that is typically very small. As a result, you are at risk of overfitting very quickly if you apply large weight updates. Here, you only want to readapt the pretrained weights in an incremental way.
This is how to implement fine-tuning of the whole base model:
# Unfreeze the base model
base_model.trainable = True
# It's important to recompile your model after you make any changes
# to the `trainable` attribute of any inner layer, so that your changes
# are taken into account
model.compile(optimizer=keras.optimizers.Adam(1e-5),  # Very low learning rate
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])
# Train end-to-end. Be careful to stop before you overfit!
model.fit(new_dataset, epochs=10, callbacks=..., validation_data=...)
Important note about compile() and trainable
Calling compile() on a model is meant to "freeze" the behavior of that model. This implies that the trainable attribute values at the time the model is compiled should be preserved throughout the lifetime of that model, until compile is called again. Hence, if you change any trainable value, make sure to call compile() again on your model for your changes to be taken into account.
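In other words, a minimal sketch of this interaction, reusing the base_model and model built above (the optimizer settings here are just placeholders), looks like this:
# The `trainable` state is captured when `compile()` is called...
base_model.trainable = False
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.BinaryCrossentropy(from_logits=True))
# ...so flipping the attribute afterwards has no effect on `fit()` yet.
base_model.trainable = True
# Recompile to make the change effective (with a very low learning rate, as above).
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss=keras.losses.BinaryCrossentropy(from_logits=True))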
Important notes about BatchNormalization layer
Many image models contain BatchNormalization layers. That layer is a special case on every imaginable count. Here are a few things to keep in mind.
BatchNormalization contains 2 non-trainable weights that get updated during training. These are the variables tracking the mean and variance of the inputs.
When you set bn_layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean & variance statistics. This is not the case for other layers in general, as weight trainability & inference/training modes are two orthogonal concepts. But the two are tied in the case of the BatchNormalization layer.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training=False when calling the base model. Otherwise the updates applied to the non-trainable weights will suddenly destroy what the model has learned.
You'll see this pattern in action in the end-to-end example at the end of this guide.
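In the meantime, here is a minimal sketch of that pattern, reusing the base_model from above. The training=False argument in the call to the base model is what keeps its BatchNormalization layers in inference mode, independently of the trainable attribute:
inputs = keras.Input(shape=(150, 150, 3))
# `training=False` forces the BatchNormalization layers inside `base_model`
# to run in inference mode, so their mean & variance statistics are never
# updated -- even after `base_model.trainable` is set to True for fine-tuning.
x = base_model(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)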
Transfer learning & fine-tuning with a custom training loop
If instead of fit(), you are using your own low-level training loop, the workflow stays essentially the same. You should be careful to only take into account the list model.trainable_weights when applying gradient updates:
# Create base model
base_model = keras.applications.Xception(
    weights='imagenet',
    input_shape=(150, 150, 3),
    include_top=False)
# Freeze base model
base_model.trainable = False

# Create new model on top.
inputs = keras.Input(shape=(150, 150, 3))
x = base_model(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam()

# Iterate over the batches of a dataset.
for inputs, targets in new_dataset:
    # Open a GradientTape.
    with tf.GradientTape() as tape:
        # Forward pass.
        predictions = model(inputs)
        # Compute the loss value for this batch.
        loss_value = loss_fn(targets, predictions)
    # Get gradients of loss wrt the *trainable* weights.
    gradients = tape.gradient(loss_value, model.trainable_weights)
    # Update the weights of the model.
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))
Likewise for fine-tuning.
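As a rough sketch of that fine-tuning step: unfreeze the base model, switch to a very low learning rate, and run the same loop. Since gradients are taken with respect to model.trainable_weights, the newly unfrozen weights are included automatically (and no compile() call is needed, because compile() is not used in a custom loop):
# Unfreeze the base model and fine-tune end-to-end with a very low learning rate.
base_model.trainable = True
optimizer = keras.optimizers.Adam(1e-5)

for inputs, targets in new_dataset:
    with tf.GradientTape() as tape:
        predictions = model(inputs)
        loss_value = loss_fn(targets, predictions)
    # `model.trainable_weights` now also includes the base model's weights.
    gradients = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))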
An end-to-end example: fine-tuning an image classification model on a cats vs. dogs dataset
To solidify these concepts, let's walk you through a concrete end-to-end transfer learning & fine-tuning example. We will load the Xception model, pre-trained on ImageNet, and use it on the Kaggle "cats vs. dogs" classification dataset.
Getting the data
First, let's fetch the cats vs. dogs dataset using TFDS. If you have your own dataset, you'll probably want to use the utility tf.keras.preprocessing.image_dataset_from_directory to generate similar labeled dataset objects from a set of images on disk filed into class-specific folders.
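For example, a minimal sketch of that utility could look like the following (the "cats_and_dogs_filtered/train" directory is hypothetical, containing one subfolder per class):
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "cats_and_dogs_filtered/train",  # hypothetical path: one subfolder per class
    validation_split=0.2,
    subset="training",
    seed=1337,
    image_size=(150, 150),
    batch_size=32)
validation_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "cats_and_dogs_filtered/train",
    validation_split=0.2,
    subset="validation",
    seed=1337,
    image_size=(150, 150),
    batch_size=32)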
Transfer learning is most useful when working with very small datasets. To keep our dataset small, we will use 40% of the original training data (25,000 images) for training, 10% for validation, and 10% for testing.
import tensorflow_datasets as tfds

tfds.disable_progress_bar()
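The load call itself, sketched here with the standard tfds.load API and the split percentages described above, looks like this:
# 40% of the "train" split for training, 10% for validation, 10% for testing.
train_ds, validation_ds, test_ds = tfds.load(
    "cats_vs_dogs",
    split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"],
    as_supervised=True)  # yields (image, label) pairs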