LambdaCallback
LambdaCallback class
tf.keras.callbacks.LambdaCallback(
    on_epoch_begin=None,
    on_epoch_end=None,
    on_batch_begin=None,
    on_batch_end=None,
    on_train_begin=None,
    on_train_end=None,
    **kwargs
)
Callback for creating simple, custom callbacks on-the-fly.
This callback is constructed with anonymous functions that will be called at the appropriate time (during Model.{fit | evaluate | predict}). Note that the callbacks expect positional arguments, as follows (see the sketch after this list):
on_epoch_begin and on_epoch_end expect two positional arguments: epoch, logs
on_batch_begin and on_batch_end expect two positional arguments: batch, logs
on_train_begin and on_train_end expect one positional argument: logs
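Because the hooks are ordinary callables, named functions with matching signatures work just as well as anonymous lambdas. A minimal sketch (the function names announce_start and log_epoch are illustrative, not part of the API):
import tensorflow as tf

def announce_start(logs):
    # Called once, before the first epoch.
    print("Starting training")

def log_epoch(epoch, logs):
    # Called after each epoch; `logs` holds the metric values.
    print(f"Epoch {epoch} done; loss={logs['loss']:.4f}")

signature_callback = tf.keras.callbacks.LambdaCallback(
    on_train_begin=announce_start,
    on_epoch_end=log_epoch,
)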
Arguments
on_epoch_begin: called at the beginning of every epoch.
on_epoch_end: called at the end of every epoch.
on_batch_begin: called at the beginning of every batch.
on_batch_end: called at the end of every batch.
on_train_begin: called at the beginning of model training.
on_train_end: called at the end of model training.
Example
# Print the batch number at the beginning of every batch.
batch_print_callback = LambdaCallback(
    on_batch_begin=lambda batch, logs: print(batch))

# Stream the epoch loss to a file in JSON format. The file content
# is not well-formed JSON but rather has a JSON object per line.
import json
json_log = open('loss_log.json', mode='wt', buffering=1)
json_logging_callback = LambdaCallback(
    on_epoch_end=lambda epoch, logs: json_log.write(
        json.dumps({'epoch': epoch, 'loss': logs['loss']}) + '\n'),
    on_train_end=lambda logs: json_log.close()
)

# Terminate some processes after having finished model training.
processes = ...
cleanup_callback = LambdaCallback(
    on_train_end=lambda logs: [
        p.terminate() for p in processes if p.is_alive()])

model.fit(...,
          callbacks=[batch_print_callback,
                     json_logging_callback,
                     cleanup_callback])
TensorBoard
TensorBoard class
tf.keras.callbacks.TensorBoard(
    log_dir="logs",
    histogram_freq=0,
    write_graph=True,
    write_images=False,
    write_steps_per_second=False,
    update_freq="epoch",
    profile_batch=2,
    embeddings_freq=0,
    embeddings_metadata=None,
    **kwargs
)
Enable visualizations for TensorBoard.
TensorBoard is a visualization tool provided with TensorFlow.
This callback logs events for TensorBoard, including:
Metrics summary plots
Training graph visualization
Activation histograms
Sampled profiling
When used in Model.evaluate, in addition to epoch summaries there will be a summary written that records evaluation metrics against Model.optimizer.iterations. The metric names are prefixed with "evaluation", and Model.optimizer.iterations serves as the step on the TensorBoard x-axis.
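A minimal sketch of this behavior, assuming a compiled model and data tensors (x_train, y_train, x_test, y_test) already exist:
tb = tf.keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(x_train, y_train, epochs=2, callbacks=[tb])
# Evaluation metrics are written against Model.optimizer.iterations,
# under names prefixed with "evaluation".
model.evaluate(x_test, y_test, callbacks=[tb])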
If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:
tensorboard --logdir=path_to_your_logs
You can find more information about TensorBoard at https://www.tensorflow.org/tensorboard.
Arguments
log_dir: the path of the directory where to save the log files to be parsed by TensorBoard, e.g. log_dir = os.path.join(working_dir, 'logs'). This directory should not be reused by any other callbacks.
histogram_freq: frequency (in epochs) at which to compute activation and weight histograms for the layers of the model. If set to 0, histograms won't be computed. Validation data (or split) must be specified for histogram visualizations.
write_graph: whether to visualize the graph in TensorBoard. The log file can become quite large when write_graph is set to True.
write_images: whether to write model weights to visualize as an image in TensorBoard.
write_steps_per_second: whether to log the training steps per second into TensorBoard. This supports both epoch and batch frequency logging.
update_freq: 'batch', 'epoch', or an integer. When using 'batch', writes the losses and metrics to TensorBoard after each batch; 'epoch' writes them after each epoch. If using an integer, say 1000, the callback writes the metrics and losses to TensorBoard every 1000 batches. Note that writing too frequently to TensorBoard can slow down your training.
profile_batch: profile the batch(es) to sample compute characteristics. profile_batch must be a non-negative integer or a tuple of integers. A pair of positive integers signifies a range of batches to profile. By default, the second batch is profiled. Set profile_batch=0 to disable profiling.
embeddings_freq: frequency (in epochs) at which embedding layers will be visualized. If set to 0, embeddings won't be visualized.
embeddings_metadata: a dictionary which maps layer names to file names in which metadata for each embedding layer is saved. See the TensorBoard documentation for details about the metadata file format. If the same metadata file is to be used for all embedding layers, a single string can be passed. (A configuration sketch combining several of these options follows this list.)
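A configuration sketch combining several of the arguments above; the values are illustrative rather than recommendations, and working_dir, the model, and the data tensors are assumed to exist:
import os

tb = tf.keras.callbacks.TensorBoard(
    log_dir=os.path.join(working_dir, "logs"),  # not shared with other callbacks
    histogram_freq=1,             # weight/activation histograms every epoch
    write_graph=True,
    write_steps_per_second=True,
    update_freq=1000,             # write metrics every 1000 batches
    profile_batch=(10, 20),       # profile batches 10 through 20
)
# Validation data is required for histogram visualizations.
model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[tb])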
Examples
Basic usage:
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])
# Then run the tensorboard command to view the visualizations.
Custom batch-level summaries in a subclassed Model:
class MyModel(tf.keras.Model):

    def build(self, _):
        self.dense = tf.keras.layers.Dense(10)

    def call(self, x):
        outputs = self.dense(x)
        # Summaries written inside `call` are recorded at batch level.
        tf.summary.histogram('outputs', outputs)
        return outputs
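A sketch of how such a model might be trained so the batch-level summaries are actually written, assuming x_train and y_train exist (update_freq=1 is illustrative):
model = MyModel()
model.compile('sgd', 'mse')

# Set `update_freq=N` to log a batch-level summary every N batches. In addition
# to any `tf.summary` ops in `Model.call`, metrics added in `Model.compile`
# are logged at the same frequency.
tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1)
model.fit(x_train, y_train, callbacks=[tb_callback])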