Doing fine-tuning of the top layers when training seems to be plateauing
Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded
Etc.
Callbacks can be passed as a list to your call to fit():
model = get_compiled_model()

callbacks = [
    keras.callbacks.EarlyStopping(
        # Stop training when `val_loss` is no longer improving
        monitor="val_loss",
        # "no longer improving" being defined as "no better than 1e-2 less"
        min_delta=1e-2,
        # "no longer improving" being further defined as "for at least 2 epochs"
        patience=2,
        verbose=1,
    )
]
model.fit(
    x_train,
    y_train,
    epochs=20,
    batch_size=64,
    callbacks=callbacks,
    validation_split=0.2,
)
Epoch 1/20
625/625 [==============================] - 1s 1ms/step - loss: 0.6032 - sparse_categorical_accuracy: 0.8355 - val_loss: 0.2303 - val_sparse_categorical_accuracy: 0.9306
Epoch 2/20
625/625 [==============================] - 1s 1ms/step - loss: 0.1855 - sparse_categorical_accuracy: 0.9458 - val_loss: 0.1775 - val_sparse_categorical_accuracy: 0.9471
Epoch 3/20
625/625 [==============================] - 1s 1ms/step - loss: 0.1280 - sparse_categorical_accuracy: 0.9597 - val_loss: 0.1585 - val_sparse_categorical_accuracy: 0.9531
Epoch 4/20
625/625 [==============================] - 1s 1ms/step - loss: 0.0986 - sparse_categorical_accuracy: 0.9704 - val_loss: 0.1418 - val_sparse_categorical_accuracy: 0.9593
Epoch 5/20
625/625 [==============================] - 1s 1ms/step - loss: 0.0774 - sparse_categorical_accuracy: 0.9761 - val_loss: 0.1319 - val_sparse_categorical_accuracy: 0.9628
Epoch 6/20
625/625 [==============================] - 1s 1ms/step - loss: 0.0649 - sparse_categorical_accuracy: 0.9798 - val_loss: 0.1465 - val_sparse_categorical_accuracy: 0.9580
Epoch 00006: early stopping
<tensorflow.python.keras.callbacks.History at 0x14e899ad0>
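Note that fit() returns a History object (shown in the output above); its history attribute is a dict mapping each logged metric to a list of per-epoch values. A minimal sketch, assuming the return value of the fit() call above is captured in a variable named history:

history = model.fit(
    x_train, y_train, epochs=20, batch_size=64, callbacks=callbacks, validation_split=0.2
)
# Each list entry corresponds to one epoch; because training stopped early,
# the lists are shorter than the requested 20 epochs.
print(history.history["val_loss"])
print(history.history["val_sparse_categorical_accuracy"])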
Many built-in callbacks are available
There are many built-in callbacks already available in Keras, such as:
ModelCheckpoint: Periodically save the model.
EarlyStopping: Stop training when training is no longer improving the validation metrics.
TensorBoard: Periodically write model logs that can be visualized in TensorBoard (more details in the section "Visualization").
CSVLogger: Stream loss and metrics data to a CSV file.
Etc.
See the callbacks documentation for the complete list.
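Several of these can be combined in a single callbacks list. As a minimal sketch (the log directory and CSV filename below are arbitrary names chosen for illustration):

callbacks = [
    # Write logs that TensorBoard can visualize (directory chosen for illustration).
    keras.callbacks.TensorBoard(log_dir="./logs"),
    # Append per-epoch loss and metric values to a CSV file.
    keras.callbacks.CSVLogger("training_log.csv"),
]
model.fit(
    x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)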
Writing your own callback
You can create a custom callback by extending the base class keras.callbacks.Callback. A callback has access to its associated model through the class property self.model.
Make sure to read the complete guide to writing custom callbacks.
Here's a simple example saving a list of per-batch loss values during training:
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs):
        self.per_batch_losses = []

    def on_batch_end(self, batch, logs):
        self.per_batch_losses.append(logs.get("loss"))
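A short usage sketch (the loss_history variable name is arbitrary): instantiate the callback, pass it to fit() like any built-in callback, and read back the recorded values afterwards:

loss_history = LossHistory()
model.fit(
    x_train, y_train, epochs=2, batch_size=64, callbacks=[loss_history], validation_split=0.2
)
# One loss value was recorded at the end of every training batch.
print(len(loss_history.per_batch_losses))
print(loss_history.per_batch_losses[:5])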
Checkpointing models
When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals.
The easiest way to achieve this is with the ModelCheckpoint callback:
model = get_compiled_model()

callbacks = [
    keras.callbacks.ModelCheckpoint(
        # Path where to save the model
        # The two parameters below mean that we will overwrite
        # the current checkpoint if and only if
        # the `val_loss` score has improved.
        # The saved model name will include the current epoch.
        filepath="mymodel_{epoch}",
        save_best_only=True,  # Only save a model if `val_loss` has improved.
        monitor="val_loss",
        verbose=1,
    )
]
model.fit(
    x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
Epoch 1/2
625/625 [==============================] - 1s 1ms/step - loss: 0.6380 - sparse_categorical_accuracy: 0.8226 - val_loss: 0.2283 - val_sparse_categorical_accuracy: 0.9317
Epoch 00001: val_loss improved from inf to 0.22825, saving model to mymodel_1
INFO:tensorflow:Assets written to: mymodel_1/assets
Epoch 2/2
625/625 [==============================] - 1s 1ms/step - loss: 0.1787 - sparse_categorical_accuracy: 0.9466 - val_loss: 0.1877 - val_sparse_categorical_accuracy: 0.9440
Epoch 00002: val_loss improved from 0.22825 to 0.18768, saving model to mymodel_2
INFO:tensorflow:Assets written to: mymodel_2/assets
<tensorflow.python.keras.callbacks.History at 0x14e899b90>
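Because each checkpoint is written as a full saved model (one directory per checkpoint, as in the output above), it can later be restored with keras.models.load_model. A quick sketch, using the mymodel_2 directory produced by the run above:

# Restore the most recent checkpoint written by ModelCheckpoint.
restored_model = keras.models.load_model("mymodel_2")
restored_model.summary()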
The ModelCheckpoint callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example:
import os
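# What follows is a rough sketch of the restart pattern described above, assuming
# the get_compiled_model() helper from earlier; the ./ckpt directory name, the
# make_or_restore_model helper, and the save_freq value are illustrative choices.

# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)


def make_or_restore_model():
    # Either restore the latest checkpoint, or create a fresh model
    # if no checkpoint is available yet.
    checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
    if checkpoints:
        latest_checkpoint = max(checkpoints, key=os.path.getctime)
        print("Restoring from", latest_checkpoint)
        return keras.models.load_model(latest_checkpoint)
    print("Creating a new model")
    return get_compiled_model()


model = make_or_restore_model()
callbacks = [
    # Save a checkpoint every 100 training batches; if training is interrupted,
    # rerunning the script resumes from the most recent checkpoint.
    keras.callbacks.ModelCheckpoint(
        filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
    )
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)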