learning_rate: A Tensor or a floating point value. The learning rate.
beta_1: A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
beta_2: A float value or a constant float tensor. The exponential decay rate for the exponentially weighted infinity norm.
epsilon: A small constant for numerical stability.
name: Optional name for the operations created when applying gradients. Defaults to "Nadam".
**kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value.
Usage
# Example
opt = tf.keras.optimizers.Nadam(learning_rate=0.2)
var1 = tf.Variable(10.0)
loss = lambda: (var1 ** 2) / 2.0
step_count = opt.minimize(loss, [var1]).numpy()
"{:.1f}".format(var1.numpy())
# 9.8
Reference
Dozat, 2015.
Adagrad
Adagrad class
tf.keras.optimizers.Adagrad(
learning_rate=0.001,
initial_accumulator_value=0.1,
epsilon=1e-07,
name="Adagrad",
**kwargs
)
Optimizer that implements the Adagrad algorithm.
Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the updates.
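To make the adaptation concrete, here is a minimal framework-free sketch of the per-parameter Adagrad update. The exact placement of epsilon differs between implementations, so treat this as illustrative rather than TensorFlow's exact kernel:
# Illustrative Adagrad step for a single scalar parameter.
# Values mirror the defaults documented below; epsilon placement is an assumption.
lr = 0.001
epsilon = 1e-07
accumulator = 0.1  # initial_accumulator_value

def adagrad_step(param, grad):
    global accumulator
    accumulator += grad ** 2  # grows with every update to this parameter
    # The effective learning rate lr / (sqrt(accumulator) + epsilon) shrinks
    # as the parameter accumulates gradient history.
    return param - lr * grad / (accumulator ** 0.5 + epsilon)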
Arguments
learning_rate: Initial value for the learning rate: either a floating point value, or a tf.keras.optimizers.schedules.LearningRateSchedule instance. Defaults to 0.001. Note that Adagrad tends to benefit from higher initial learning rate values compared to other optimizers. To match the exact form in the original paper, use 1.0.
initial_accumulator_value: Floating point value. Starting value for the per-parameter accumulators (running sums of squared gradients). Must be non-negative.
epsilon: Small floating point value used to maintain numerical stability.
name: Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad".
**kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm: the maximum L2 norm allowed for each weight variable's gradient; "clipvalue" (float) clips gradients by value: the maximum absolute value allowed for each gradient element.
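A minimal usage sketch mirroring the Nadam example above; the quadratic loss and the learning rate of 1.0 (the paper-matching value mentioned under learning_rate) are illustrative choices, not requirements:
import tensorflow as tf

opt = tf.keras.optimizers.Adagrad(learning_rate=1.0)
var1 = tf.Variable(10.0)
loss = lambda: (var1 ** 2) / 2.0  # gradient w.r.t. var1 is var1 itself
opt.minimize(loss, [var1])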
Reference
Duchi et al., 2011.
TerminateOnNaN
TerminateOnNaN class
tf.keras.callbacks.TerminateOnNaN()
Callback that terminates training when a NaN loss is encountered.
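A hedged usage sketch; model, x_train, and y_train are hypothetical stand-ins for a compiled model and its training data:
model.fit(
    x_train,
    y_train,
    epochs=10,
    callbacks=[tf.keras.callbacks.TerminateOnNaN()],  # stops training on NaN loss
)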
ProgbarLogger
ProgbarLogger class
tf.keras.callbacks.ProgbarLogger(count_mode="samples", stateful_metrics=None)
Callback that prints metrics to stdout.
Arguments
count_mode: One of "steps" or "samples". Whether the progress bar should count samples seen or steps (batches) seen.
stateful_metrics: Iterable of string names of metrics that should not be averaged over an epoch. Metrics in this list will be logged as-is; all others will be averaged over the epoch (e.g. loss). If not provided, defaults to the Model's metrics.
Raises
ValueError: In case of invalid count_mode.
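For illustration, a sketch of passing the callback explicitly; model and the data variables are hypothetical, and note that fit() normally adds a progress-bar logger on its own when verbose > 0:
progbar = tf.keras.callbacks.ProgbarLogger(count_mode="steps")  # count batches, not samples
model.fit(x_train, y_train, epochs=5, callbacks=[progbar])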
ModelCheckpoint
ModelCheckpoint class
tf.keras.callbacks.ModelCheckpoint(
filepath,
monitor="val_loss",
verbose=0,
save_best_only=False,
save_weights_only=False,
mode="auto",
save_freq="epoch",
options=None,
**kwargs
)
Callback to save the Keras model or model weights at some frequency.
ModelCheckpoint callback is used in conjunction with training using model.fit() to save a model or weights (in a checkpoint file) at some interval, so the model or weights can be loaded later to continue the training from the state saved.
A few options this callback provides include:
Whether to only keep the model that has achieved the "best performance" so far, or whether to save the model at the end of every epoch regardless of performance.
Definition of 'best'; which quantity to monitor and whether it should be maximized or minimized.
The frequency it should save at. Currently, the callback supports saving at the end of every epoch, or after a fixed number of training batches.
Whether only weights are saved, or the whole model is saved.
Note: If you get WARNING:tensorflow:Can save best model only with <name> available, skipping, see the description of the monitor argument for details on how to get this right.
Example
model.compile(loss=..., optimizer=...,
metrics=['accuracy'])
EPOCHS = 10
checkpoint_filepath = '/tmp/checkpoint'
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_filepath,
save_weights_only=True,
monitor='val_accuracy',
mode='max',
save_best_only=True)
# Model weights are saved at the end of every epoch, if it's the best seen
# so far.
model.fit(epochs=EPOCHS, callbacks=[model_checkpoint_callback])
# The model weights (that are considered the best) are loaded into the model.
model.load_weights(checkpoint_filepath)
Arguments
filepath: string or PathLike, path to save the model file. e.g. filepath = os.path.join(working_dir, 'ckpt', file_name). filepath can contain named formatting options, which will be filled with the value of epoch and keys in logs (passed in on_epoch_end). For example: if filepath is weights.{epoch:02d}-{val_loss:.2f}.hdf5, then the model checkpoints will be saved with the epoch number and the validation loss in the filename. The directory of the filepath should not be reused by any other callbacks to avoid conflicts.
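To make the formatting options concrete, a sketch using the template from the paragraph above; the /tmp/ckpt directory is an assumption:
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath='/tmp/ckpt/weights.{epoch:02d}-{val_loss:.2f}.hdf5')
# If epoch 3 ends with val_loss == 0.98, the checkpoint is written to
# /tmp/ckpt/weights.03-0.98.hdf5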