monitor: The metric name to monitor. Typically the metrics are set by the Model.compile method. Note:
Prefix the name with "val_" to monitor validation metrics.
Use "loss" or "val_loss" to monitor the model's total loss.
If you specify metrics as strings, like "accuracy", pass the same string (with or without the "val_" prefix).
If you pass metrics.Metric objects, monitor should be set to metric.name.
If you're not sure about the metric names, you can check the contents of the history.history dictionary returned by history = model.fit().
Multi-output models set additional prefixes on the metric names.
verbose: verbosity mode, 0 or 1.
save_best_only: if save_best_only=True, the model is saved only when it is considered the "best"; the best model so far according to the quantity monitored will not be overwritten by a worse one. If filepath doesn't contain formatting options like {epoch}, filepath will be overwritten by each new better model.
mode: one of {'auto', 'min', 'max'}. If save_best_only=True, the decision to overwrite the current save file is made based on either the maximization or the minimization of the monitored quantity. For val_acc this should be max; for val_loss it should be min; etc. In auto mode, the mode is set to max if the monitored quantity is 'acc' or starts with 'fmeasure', and to min for all other quantities.
save_weights_only: if True, then only the model's weights will be saved (model.save_weights(filepath)), else the full model is saved (model.save(filepath)).
save_freq: 'epoch' or integer. When using 'epoch', the callback saves the model after each epoch. When using an integer, the callback saves the model at the end of this many batches. If the Model is compiled with steps_per_execution=N, then the saving criteria will be checked every Nth batch. Note that if the saving isn't aligned to epochs, the monitored metric may potentially be less reliable (it could reflect as little as 1 batch, since the metrics get reset every epoch). Defaults to 'epoch'.
options: Optional tf.train.CheckpointOptions object if save_weights_only is true or optional tf.saved_model.SaveOptions object if save_weights_only is false.
**kwargs: Additional arguments for backwards compatibility. Possible key is period.
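Example
A minimal sketch of the save-best-only pattern; the model, training data, and checkpoint path below are illustrative assumptions, not prescribed by the API:
import numpy as np
import tensorflow as tf

# Illustrative toy model and data (assumptions, not part of the docs above).
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='sgd', loss='mse')

checkpoint_filepath = '/tmp/checkpoint'  # hypothetical path
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath,
    save_weights_only=True,   # saves via model.save_weights(filepath)
    monitor='val_loss',       # "val_" prefix: watch the validation loss
    mode='min',               # lower loss is better
    save_best_only=True)      # a worse epoch never overwrites the best checkpoint

x, y = np.random.rand(100, 3), np.random.rand(100, 1)
model.fit(x, y, validation_split=0.2, epochs=5,
          callbacks=[model_checkpoint_callback], verbose=0)

# The best weights (not necessarily the last ones) can be reloaded afterwards.
model.load_weights(checkpoint_filepath)
LearningRateScheduler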
LearningRateScheduler class
tf.keras.callbacks.LearningRateScheduler(schedule, verbose=0)
Learning rate scheduler.
At the beginning of every epoch, this callback gets the updated learning rate value from the schedule function provided at __init__, with the current epoch and current learning rate as inputs, and applies the updated learning rate to the optimizer.
Arguments
schedule: a function that takes an epoch index (integer, indexed from 0) and current learning rate (float) as inputs and returns a new learning rate as output (float).
verbose: int. 0: quiet, 1: update messages.
Example
>>> # This function keeps the initial learning rate for the first ten epochs
>>> # and decreases it exponentially after that.
>>> def scheduler(epoch, lr):
...   if epoch < 10:
...     return lr
...   else:
...     return lr * tf.math.exp(-0.1)
>>>
>>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
>>> model.compile(tf.keras.optimizers.SGD(), loss='mse')
>>> round(model.optimizer.lr.numpy(), 5)
0.01
>>> callback = tf.keras.callbacks.LearningRateScheduler(scheduler)
>>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
... epochs=15, callbacks=[callback], verbose=0)
>>> round(model.optimizer.lr.numpy(), 5)
0.00607
ReduceLROnPlateau
ReduceLROnPlateau class
tf.keras.callbacks.ReduceLROnPlateau(
monitor="val_loss",
factor=0.1,
patience=10,
verbose=0,
mode="auto",
min_delta=0.0001,
cooldown=0,
min_lr=0,
**kwargs
)
Reduce learning rate when a metric has stopped improving.
Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.
Example
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
patience=5, min_lr=0.001)
model.fit(X_train, Y_train, callbacks=[reduce_lr])
Arguments
monitor: quantity to be monitored.
factor: factor by which the learning rate will be reduced. new_lr = lr * factor.
patience: number of epochs with no improvement after which learning rate will be reduced.
verbose: int. 0: quiet, 1: update messages.
mode: one of {'auto', 'min', 'max'}. In 'min' mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in 'max' mode it will be reduced when the quantity monitored has stopped increasing; in 'auto' mode, the direction is automatically inferred from the name of the monitored quantity.
min_delta: threshold for measuring the new optimum, to only focus on significant changes.
cooldown: number of epochs to wait before resuming normal operation after lr has been reduced.
min_lr: lower bound on the learning rate.
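As a concrete reading of these arguments (the starting learning rate of 0.01 is an illustrative assumption): with factor=0.2, patience=5, and min_lr=0.001 as in the example above, the learning rate is cut to 0.01 * 0.2 = 0.002 after five epochs without val_loss improvement; on a second plateau the proposed 0.002 * 0.2 = 0.0004 would fall below the floor, so the rate is clipped to 0.001 instead.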
CSVLogger
CSVLogger class
tf.keras.callbacks.CSVLogger(filename, separator=",", append=False)
Callback that streams epoch results to a CSV file.
Supports all values that can be represented as a string, including 1D iterables such as np.ndarray.
Example
csv_logger = CSVLogger('training.log')
model.fit(X_train, Y_train, callbacks=[csv_logger])
Arguments
filename: Filename of the CSV file, e.g. 'run/log.csv'.
separator: String used to separate elements in the CSV file.
append: Boolean. True: append if file exists (useful for continuing training). False: overwrite existing file.
LambdaCallback
LambdaCallback class
tf.keras.callbacks.LambdaCallback(
on_epoch_begin=None,
on_epoch_end=None,
on_batch_begin=None,