# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)


def make_or_restore_model():
    # Either restore the latest model, or create a fresh one
    # if there is no checkpoint available.
    checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
    if checkpoints:
        latest_checkpoint = max(checkpoints, key=os.path.getctime)
        print("Restoring from", latest_checkpoint)
        return keras.models.load_model(latest_checkpoint)
    print("Creating a new model")
    return get_compiled_model()


model = make_or_restore_model()
callbacks = [
    # This callback saves a SavedModel every 100 batches.
    # We include the training loss in the saved model name.
    keras.callbacks.ModelCheckpoint(
        filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
    )
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
Creating a new model
98/1563 [>.............................] - ETA: 1s - loss: 1.3456 - sparse_categorical_accuracy: 0.6230INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.90/assets
155/1563 [=>............................] - ETA: 5s - loss: 1.1479 - sparse_categorical_accuracy: 0.6822INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.66/assets
248/1563 [===>..........................] - ETA: 5s - loss: 0.9643 - sparse_categorical_accuracy: 0.7340INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.57/assets
353/1563 [=====>........................] - ETA: 5s - loss: 0.8454 - sparse_categorical_accuracy: 0.7668INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.50/assets
449/1563 [=======>......................] - ETA: 5s - loss: 0.7714 - sparse_categorical_accuracy: 0.7870INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.47/assets
598/1563 [==========>...................] - ETA: 4s - loss: 0.6930 - sparse_categorical_accuracy: 0.8082INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.43/assets
658/1563 [===========>..................] - ETA: 4s - loss: 0.6685 - sparse_categorical_accuracy: 0.8148INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.41/assets
757/1563 [=============>................] - ETA: 4s - loss: 0.6340 - sparse_categorical_accuracy: 0.8241INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.39/assets
856/1563 [===============>..............] - ETA: 3s - loss: 0.6051 - sparse_categorical_accuracy: 0.8319INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.37/assets
956/1563 [=================>............] - ETA: 3s - loss: 0.5801 - sparse_categorical_accuracy: 0.8387INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.35/assets
1057/1563 [===================>..........] - ETA: 2s - loss: 0.5583 - sparse_categorical_accuracy: 0.8446INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.34/assets
1153/1563 [=====================>........] - ETA: 2s - loss: 0.5399 - sparse_categorical_accuracy: 0.8495INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.33/assets
1255/1563 [=======================>......] - ETA: 1s - loss: 0.5225 - sparse_categorical_accuracy: 0.8542INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.32/assets
1355/1563 [=========================>....] - ETA: 1s - loss: 0.5073 - sparse_categorical_accuracy: 0.8583INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.31/assets
1457/1563 [==========================>...] - ETA: 0s - loss: 0.4933 - sparse_categorical_accuracy: 0.8621INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.30/assets
1563/1563 [==============================] - 8s 5ms/step - loss: 0.4800 - sparse_categorical_accuracy: 0.8657
<tensorflow.python.keras.callbacks.History at 0x14ec08390>
You can also write your own callback for saving and restoring models.
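As a rough illustration, a minimal custom callback might save the full model at the end of every epoch; the directory argument and filename pattern below are just assumptions for the sketch, not part of the guide above.
import os
from tensorflow import keras


class EveryEpochCheckpoint(keras.callbacks.Callback):
    """Illustrative sketch: save the full model at the end of each epoch."""

    def __init__(self, save_dir):
        super().__init__()
        self.save_dir = save_dir

    def on_epoch_end(self, epoch, logs=None):
        # `self.model` is set by Keras before training starts.
        path = os.path.join(self.save_dir, "model-epoch-{}".format(epoch))
        self.model.save(path)
        print("Saved checkpoint to", path)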
For a complete guide on serialization and saving, see the guide to saving and serializing Models.
Using learning rate schedules
A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".
The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).
Passing a schedule to an optimizer
You can easily use a static learning rate decay schedule by passing a schedule object as the learning_rate argument in your optimizer:
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
Several built-in schedules are available: ExponentialDecay, PiecewiseConstantDecay, PolynomialDecay, and InverseTimeDecay.
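As another sketch, a piecewise-constant schedule could be set up like this; the boundary steps and rate values below are arbitrary numbers chosen for illustration.
from tensorflow import keras

# Use 0.1 for the first 10,000 steps, 0.01 until step 20,000, then 0.001.
lr_schedule = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[10000, 20000], values=[0.1, 0.01, 0.001]
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)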
Using callbacks to implement a dynamic learning rate schedule
A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.
However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the ReduceLROnPlateau callback.
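For instance, a sketch along these lines; the monitor, factor, patience, and min_lr values are illustrative choices rather than recommendations, and model, x_train, and y_train are assumed to come from earlier in this guide.
from tensorflow import keras

callbacks = [
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss",  # quantity to watch
        factor=0.5,          # multiply the learning rate by 0.5 when triggered
        patience=3,          # wait 3 epochs with no improvement before reducing
        min_lr=1e-6,         # lower bound on the learning rate
    )
]
model.fit(x_train, y_train, validation_split=0.2, epochs=20, callbacks=callbacks)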
Visualizing loss and metrics during training
The best way to keep an eye on your model during training is to use TensorBoard -- a browser-based application that you can run locally that provides you with:
Live plots of the loss and metrics for training and evaluation
(optionally) Visualizations of the histograms of your layer activations
(optionally) 3D visualizations of the embedding spaces learned by your Embedding layers
If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:
tensorboard --logdir=/full_path_to_your_logs
Using the TensorBoard callback
The easiest way to use TensorBoard with a Keras model and the fit() method is the TensorBoard callback.
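In its simplest form, you point the callback at a log directory; the path below reuses the placeholder from the command above, and the histogram_freq setting is just an illustrative choice.
from tensorflow import keras

callbacks = [
    keras.callbacks.TensorBoard(
        log_dir="/full_path_to_your_logs",  # where to write the TensorBoard logs
        histogram_freq=1,  # record weight histograms every epoch
    )
]
model.fit(x_train, y_train, epochs=10, callbacks=callbacks)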
Customizing what happens in fit()
Author: fchollet
Date created: 2020/04/15
Last modified: 2020/04/15
Description: Complete guide to overriding the training step of the Model class.
Introduction
When you're doing supervised learning, you can use fit() and everything works smoothly.
When you need to write your own training loop from scratch, you can use the GradientTape and take control of every little detail.
But what if you need a custom training algorithm, but you still want to benefit from the convenient features of fit(), such as callbacks, built-in distribution support, or step fusing?
A core principle of Keras is progressive disclosure of complexity. You should always be able to get into lower-level workflows in a gradual way. You shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience.
When you need to customize what fit() does, you should override the training step function of the Model class. This is the function that is called by fit() for every batch of data. You will then be able to call fit() as usual -- and it will be running your own learning algorithm.
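A minimal sketch of what such an override can look like, assuming a simple supervised setup where fit() receives (x, y) pairs; the toy model, optimizer, loss, and data at the end are placeholders for illustration.
import tensorflow as tf
from tensorflow import keras


class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data passed to fit(); here we assume (x, y) pairs.
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss configured in compile().
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients and update the weights.
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Update the metrics configured in compile().
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to their current values.
        return {m.name: m.result() for m in self.metrics}


# Build and use it exactly like a regular functional model.
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(tf.random.normal((64, 32)), tf.random.normal((64, 1)), epochs=1)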