# On the chief (worker index 0), set TF_CONFIG:
import json
import os

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 0}
})

# Open a strategy scope and create/restore the model.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = make_or_restore_model()

callbacks = [
    # This callback saves a SavedModel every 100 batches
    keras.callbacks.ModelCheckpoint(filepath='path/to/cloud/location/ckpt',
                                    save_freq=100),
    keras.callbacks.TensorBoard('path/to/cloud/location/tb/')
]
model.fit(train_dataset,
          callbacks=callbacks,
          ...)
On other workers: |
# Set TF_CONFIG
worker_index = 1  # For instance
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': worker_index}
})

# Open a strategy scope and create/restore the model.
# You can restore from the checkpoint saved by the chief.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = make_or_restore_model()

callbacks = [
    keras.callbacks.ModelCheckpoint(filepath='local/path/ckpt', save_freq=100),
    keras.callbacks.TensorBoard('local/path/tb/')
]
model.fit(train_dataset,
          callbacks=callbacks,
          ...)
On the evaluator: |
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = make_or_restore_model()  # Restore from the checkpoint saved by the chief.

results = model.evaluate(val_dataset)
# Then, log the results in a shared location, write TensorBoard logs, etc.
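All three snippets above rely on a make_or_restore_model() helper, which is defined earlier in the full guide. As a minimal sketch, assuming checkpoints are written by the ModelCheckpoint callback to a known directory (the checkpoint_dir value and the make_model() helper below are illustrative):

checkpoint_dir = 'path/to/cloud/location'  # Illustrative; match your ModelCheckpoint filepath

def make_or_restore_model():
    # Restore the latest checkpoint if one exists;
    # otherwise build a fresh model from scratch.
    if os.path.isdir(checkpoint_dir):
        checkpoints = [os.path.join(checkpoint_dir, name)
                       for name in os.listdir(checkpoint_dir)]
        if checkpoints:
            latest = max(checkpoints, key=os.path.getctime)
            print('Restoring from', latest)
            return keras.models.load_model(latest)
    print('Creating a new model')
    return make_model()  # make_model() is a hypothetical model-building helper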
Further reading |
TensorFlow distributed training guide |
Tutorial on multi-worker training with Keras |
MirroredStrategy docs |
MultiWorkerMirroredStrategy docs |
Distributed training in tf.keras with Weights & Biases

Training Keras models with TensorFlow Cloud
Author: Jonah Kohn |
Date created: 2020/08/11 |
Last modified: 2020/08/11 |
Description: In-depth usage guide for TensorFlow Cloud. |
Introduction |
TensorFlow Cloud is a Python package that provides APIs for a seamless transition from local debugging to distributed training in Google Cloud. It simplifies the process of training TensorFlow models on the cloud into a single, simple function call, requiring minimal setup and no changes to your model. TensorFlow Cloud automatically handles cloud-specific tasks such as creating VM instances and setting distribution strategies for your models. This guide demonstrates how to interface with Google Cloud through TensorFlow Cloud, and the wide range of functionality it provides. We'll start with the simplest use case.
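To make the "single function call" concrete, here is a minimal sketch of the pattern the rest of this guide builds on: you append a call to tfc.run() to an otherwise ordinary Keras training script, and the script is packaged and executed on Google Cloud instead of locally. (The surrounding training code is elided here; configuration options for tfc.run() are covered later in the guide.)

import tensorflow_cloud as tfc

# ... define, compile, and fit your Keras model as usual ...

# When the script is run locally, tfc.run() packages it up,
# submits it to Google Cloud, and re-executes it there.
tfc.run()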
Setup |
We'll get started by installing TensorFlow Cloud and importing the packages we'll need in this guide.
!pip install -q tensorflow_cloud |
import tensorflow as tf |
import tensorflow_cloud as tfc |
from tensorflow import keras |
from tensorflow.keras import layers |
API overview: a first end-to-end example |
Let's begin with a Keras model training script, such as the following CNN: |
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() |
model = keras.Sequential(
    [
        keras.Input(shape=(28, 28)),
        # Use a Rescaling layer to make sure input values are in the [0, 1] range.
        layers.experimental.preprocessing.Rescaling(1.0 / 255),
        # The original images have shape (28, 28), so we reshape them to (28, 28, 1)
        layers.Reshape(target_shape=(28, 28, 1)),
        # Follow up with a classic small convnet
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10),
    ]
)
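A training script like this would typically finish by compiling and fitting the model. As a minimal sketch (the optimizer, loss, and hyperparameters below are illustrative, not prescribed by the guide; the loss uses from_logits=True because the final Dense layer has no softmax):

model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)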