class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
    """Stop training when the loss is at its minimum, i.e. when the loss stops decreasing.

    Arguments:
        patience: Number of epochs to wait after the minimum has been hit. After this
            number of epochs with no improvement, training stops.
    """
    def __init__(self, patience=0):
        super(EarlyStoppingAtMinLoss, self).__init__()
        self.patience = patience
        # best_weights stores the weights at which the minimum loss occurs.
        self.best_weights = None

    def on_train_begin(self, logs=None):
        # The number of epochs waited while the loss is no longer at a minimum.
        self.wait = 0
        # The epoch at which training stops.
        self.stopped_epoch = 0
        # Initialize the best loss as infinity.
        self.best = np.inf

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get("loss")
        if np.less(current, self.best):
            self.best = current
            self.wait = 0
            # Record the best weights if the current result is better (lower loss).
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=64,
    steps_per_epoch=5,
    epochs=30,
    verbose=0,
    callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],
)
For batch 0, loss is 34.49.
For batch 1, loss is 438.63.
For batch 2, loss is 301.08.
For batch 3, loss is 228.22.
For batch 4, loss is 183.83.
The average loss for epoch 0 is 183.83 and mean absolute error is 8.24.
For batch 0, loss is 9.19.
For batch 1, loss is 7.99.
For batch 2, loss is 7.32.
For batch 3, loss is 6.83.
For batch 4, loss is 6.31.
The average loss for epoch 1 is 6.31 and mean absolute error is 2.07.
For batch 0, loss is 5.26.
For batch 1, loss is 4.62.
For batch 2, loss is 4.51.
For batch 3, loss is 4.56.
For batch 4, loss is 4.52.
The average loss for epoch 2 is 4.52 and mean absolute error is 1.72.
For batch 0, loss is 4.36.
For batch 1, loss is 6.15.
For batch 2, loss is 10.84.
For batch 3, loss is 17.60.
For batch 4, loss is 26.95.
The average loss for epoch 3 is 26.95 and mean absolute error is 4.29.
Restoring model weights from the end of the best epoch.
Epoch 00004: early stopping
<tensorflow.python.keras.callbacks.History at 0x15e0f08d0>
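For comparison, the same behavior is available out of the box via the built-in keras.callbacks.EarlyStopping. A minimal sketch, assuming a TF 2.x Keras where the restore_best_weights argument is available:

early_stopping = keras.callbacks.EarlyStopping(
    monitor="loss",             # watch the training loss, as in the custom callback
    patience=0,                 # stop after the first epoch with no improvement
    restore_best_weights=True,  # roll back to the best weights on stopping
)
model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=64,
    epochs=30,
    verbose=0,
    callbacks=[early_stopping],
)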
Learning rate scheduling
In this example, we show how a custom Callback can be used to dynamically change the learning rate of the optimizer during training.
See callbacks.LearningRateScheduler for a more general implementation.
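For reference, a minimal sketch of the built-in version (assuming TF 2.x, where the schedule function may also take the current learning rate as a second argument):

# Built-in equivalent: halve the learning rate every 10 epochs.
builtin_scheduler = keras.callbacks.LearningRateScheduler(
    lambda epoch, lr: lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr
)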
class CustomLearningRateScheduler(keras.callbacks.Callback):
    """Learning rate scheduler which sets the learning rate according to a schedule.

    Arguments:
        schedule: a function that takes an epoch index
            (integer, indexed from 0) and the current learning rate
            as inputs and returns a new learning rate as output (float).
    """

    def __init__(self, schedule):
        super(CustomLearningRateScheduler, self).__init__()
        self.schedule = schedule

    def on_epoch_begin(self, epoch, logs=None):
        if not hasattr(self.model.optimizer, "learning_rate"):
            raise ValueError('Optimizer must have a "learning_rate" attribute.')
        # Get the current learning rate from the model's optimizer.
        lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
        # Call the schedule function to get the scheduled learning rate.
        scheduled_lr = self.schedule(epoch, lr)
        # Set the new learning rate on the optimizer before this epoch starts.
        tf.keras.backend.set_value(self.model.optimizer.learning_rate, scheduled_lr)
        print("\nEpoch %05d: Learning rate is %6.4f." % (epoch, scheduled_lr))
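To put the callback to work, it needs a schedule function. The step schedule below is an illustrative sketch; the breakpoints and rates in LR_SCHEDULE are arbitrary assumptions, not prescribed values:

LR_SCHEDULE = [
    # (epoch to start, learning rate) tuples -- hypothetical example values.
    (3, 0.05),
    (6, 0.01),
    (9, 0.005),
]

def lr_schedule(epoch, lr):
    """Return the learning rate for this epoch from LR_SCHEDULE, else keep the current one."""
    if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
        return lr
    for start_epoch, new_lr in LR_SCHEDULE:
        if epoch == start_epoch:
            return new_lr
    return lr

model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=64,
    steps_per_epoch=5,
    epochs=15,
    verbose=0,
    callbacks=[LossAndErrorPrintingCallback(), CustomLearningRateScheduler(lr_schedule)],
)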