Custom batch-level summaries in a subclassed Model:

class MyModel(tf.keras.Model):

    def build(self, _):
        self.dense = tf.keras.layers.Dense(10)

    def call(self, x):
        outputs = self.dense(x)
        tf.summary.histogram('outputs', outputs)
        return outputs

model = MyModel()
model.compile('sgd', 'mse')

# Make sure to set `update_freq=N` to log a batch-level summary every N batches.
# In addition to any `tf.summary` contained in `Model.call`, metrics added in
# `Model.compile` will be logged every N batches.
tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1)
model.fit(x_train, y_train, callbacks=[tb_callback])
Custom batch-level summaries in a Functional API Model: |
def my_summary(x):
    tf.summary.histogram('x', x)
    return x

inputs = tf.keras.Input(10)
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Lambda(my_summary)(x)
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', 'mse')

# Make sure to set `update_freq=N` to log a batch-level summary every N batches.
# In addition to any `tf.summary` contained in `Model.call`, metrics added in
# `Model.compile` will be logged every N batches.
tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1)
model.fit(x_train, y_train, callbacks=[tb_callback])
Profiling: |
# Profile a single batch, e.g. the 5th batch.
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir='./logs', profile_batch=5)
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])

# Profile a range of batches, e.g. from 10 to 20.
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir='./logs', profile_batch=(10, 20))
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])

EarlyStopping
EarlyStopping class |
tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    min_delta=0,
    patience=0,
    verbose=0,
    mode="auto",
    baseline=None,
    restore_best_weights=False,
)
Stop training when a monitored metric has stopped improving. |
Assume the goal of training is to minimize the loss. The metric to be monitored would then be 'loss', and the mode would be 'min'. A model.fit() training loop checks at the end of every epoch whether the loss is still decreasing, taking min_delta and patience into account if applicable. Once the loss is found to be no longer decreasing, model.stop_training is set to True and training terminates.
The quantity to be monitored needs to be available in the logs dict. To make it so, pass the loss or metrics at model.compile().
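The epoch-end check described above can be sketched in plain Python. This is an illustrative stand-in for the callback's decision rule with mode='min', not the actual Keras implementation; it operates on a precomputed list of per-epoch losses:

```python
# Illustrative sketch of EarlyStopping's epoch-end check (mode='min').
# Mirrors the documented min_delta / patience semantics; not Keras source.

def epochs_run(losses, min_delta=0.0, patience=0):
    """Return how many epochs run before early stopping triggers."""
    best = float('inf')
    wait = 0          # consecutive epochs without improvement
    count = 0
    for loss in losses:
        count += 1
        if loss < best - min_delta:   # improvement must exceed min_delta
            best = loss
            wait = 0
        else:
            wait += 1                 # no qualifying improvement this epoch
            if wait >= patience:      # patience exhausted: stop training
                break
    return count

# One improving epoch, then a plateau: with patience=2 the plateau is
# tolerated for two epochs before training stops.
print(epochs_run([1.0, 0.8, 0.8, 0.8, 0.8], patience=2))  # 4
```

With patience=0, the first epoch that fails to improve by more than min_delta stops training immediately.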
Arguments |
monitor: Quantity to be monitored. |
min_delta: Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta will count as no improvement.
patience: Number of epochs with no improvement after which training will be stopped. |
verbose: Verbosity mode.
mode: One of {"auto", "min", "max"}. In "min" mode, training will stop when the quantity monitored has stopped decreasing; in "max" mode it will stop when the quantity monitored has stopped increasing; in "auto" mode, the direction is automatically inferred from the name of the monitored quantity.
baseline: Baseline value for the monitored quantity. Training will stop if the model doesn't show improvement over the baseline. |
restore_best_weights: Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. An epoch will be restored regardless of the performance relative to the baseline. If no epoch improves on baseline, training will run for patience epochs and restore weights from the best epoch in that set. |
Example |
>>> callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3) |
>>> # This callback will stop the training when there is no improvement in |
>>> # the loss for three consecutive epochs. |
>>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) |
>>> model.compile(tf.keras.optimizers.SGD(), loss='mse') |
>>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5), |
... epochs=10, batch_size=1, callbacks=[callback], |
... verbose=0) |
>>> len(history.history['loss']) # Only 4 epochs are run. |
4 |
RemoteMonitor |
RemoteMonitor class |
tf.keras.callbacks.RemoteMonitor(
    root="http://localhost:9000",
    path="/publish/epoch/end/",
    field="data",
    headers=None,
    send_as_json=False,
)
Callback used to stream events to a server. |
Requires the requests library. Events are sent to root + '/publish/epoch/end/' by default. Calls are HTTP POST, with a data argument which is a JSON-encoded dictionary of event data. If send_as_json=True, the content type of the request will be "application/json". Otherwise the serialized JSON will be sent within a form. |
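The two payload shapes described above can be sketched with the standard library alone. This mimics what the callback sends and is not the Keras implementation itself; `logs` stands in for the event data gathered at epoch end:

```python
# Sketch of RemoteMonitor's two payload shapes; illustrative only.
import json
import urllib.parse

logs = {'epoch': 3, 'loss': 0.25}  # stand-in for epoch-end event data

# send_as_json=False (default): the serialized JSON is placed inside a
# form, under the configured `field` (default 'data').
form_body = urllib.parse.urlencode({'data': json.dumps(logs)})
print(form_body)

# send_as_json=True: the JSON document itself is the request body,
# sent with Content-Type: application/json.
json_body = json.dumps(logs)
print(json_body)
```

Either body would then be POSTed to root + path; the server can recover the event dict from the form field or directly from the JSON body.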
Arguments |
root: String; root url of the target server. |
path: String; path relative to root to which the events will be sent. |
field: String; JSON field under which the data will be stored. The field is used only if the payload is sent within a form (i.e. send_as_json is set to False).
headers: Dictionary; optional custom HTTP headers.
send_as_json: Boolean; whether the request should be sent as "application/json".