model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

model.fit(x_train, y_train, epochs=20, batch_size=128, validation_split=0.1)
To train this model on Google Cloud, we just need to add a call to run() at the beginning of the script, right after importing tensorflow_cloud:

tfc.run()
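In other words, the training code itself stays unchanged; only the placement of run() matters. A minimal sketch of the resulting script layout:

import tensorflow_cloud as tfc

# When executed locally, run() packages this script into a Docker image,
# submits it as a Cloud training job, and streams the logs back (as described
# below). Inside the Cloud job, run() returns immediately and the training
# code below it executes remotely.
tfc.run()

# ... the rest of the script: keras imports, model definition, compile(), fit() ...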
You don’t need to worry about cloud-specific tasks such as creating VM instances and distribution strategies when using TensorFlow Cloud. The API includes intelligent defaults for all the parameters: everything is configurable, but many models can rely on these defaults.
Upon calling run(), TensorFlow Cloud will:

- Make your Python script or notebook distribution-ready.
- Convert it into a Docker image with required dependencies.
- Run the training job on a GCP GPU-powered VM.
- Stream relevant logs and job information.

The default VM configuration is 1 chief and 0 workers with 8 CPU cores and 1 Tesla T4 GPU.
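If those defaults don't fit your workload, run() accepts an explicit machine configuration. As a sketch (the bucket name is a placeholder), requesting a chief with two T4 GPUs and no workers might look like:

import tensorflow_cloud as tfc

tfc.run(
    # Placeholder: a Cloud Storage bucket used to store the built Docker image.
    docker_image_bucket_name="your-gcs-bucket",
    chief_config=tfc.MachineConfig(
        cpu_cores=8,
        memory=30,
        accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4,
        accelerator_count=2,
    ),
    worker_count=0,
)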
Google Cloud configuration
Before you can train on Cloud, you will need to complete some first-time setup. If you're a new Google Cloud user, there are a few preliminary steps you will need to take:

- Create a GCP Project;
- Enable AI Platform Services;
- Create a Service Account;
- Download an authorization key (see the sketch below);
- Create a Cloud Storage bucket.

Detailed first-time setup instructions can be found in the TensorFlow Cloud README, and an additional setup example is shown on the TensorFlow Blog.
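As part of that setup, the authorization key needs to be visible to TensorFlow Cloud before run() is called. One common approach, sketched below with a placeholder path, is to export it via the standard GCP environment variable:

import os

# Placeholder path: point this at the service account key JSON you downloaded.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/your-key.json"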
Common workflows and Cloud storage
In most cases, you'll want to retrieve your model after training on Google Cloud. For this, it's crucial to redirect saving and loading to Cloud Storage while training remotely. We can direct TensorFlow Cloud to our Cloud Storage bucket for a variety of tasks: the bucket can be used to save and load large training datasets, store callback logs or model weights, and save trained model files. To begin, let's configure fit() to save the model to Cloud Storage, and set up TensorBoard monitoring to track training progress.
def create_model():
    model = keras.Sequential(
        [
            keras.Input(shape=(28, 28)),
            layers.experimental.preprocessing.Rescaling(1.0 / 255),
            layers.Reshape(target_shape=(28, 28, 1)),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(2),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(2),
            layers.Conv2D(32, 3, activation="relu"),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(10),
        ]
    )

    model.compile(
        optimizer=keras.optimizers.Adam(),
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )
    return model
Let's save the TensorBoard logs and model checkpoints generated during training in our cloud storage bucket.

import datetime
import os

# Note: Please change the gcp_bucket to your bucket name.
gcp_bucket = "keras-examples"

checkpoint_path = os.path.join("gs://", gcp_bucket, "mnist_example", "save_at_{epoch}")

# Timestamp included to enable timeseries graphs.
tensorboard_path = os.path.join(
    "gs://", gcp_bucket, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
)

callbacks = [
    # TensorBoard will store logs for each epoch and graph performance for us.
    keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1),
    # ModelCheckpoint will save models after each epoch for retrieval later.
    keras.callbacks.ModelCheckpoint(checkpoint_path),
    # EarlyStopping will terminate training when val_loss ceases to improve.
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=3),
]

model = create_model()
Here, we will load our data from Keras directly. In general, it's best practice to store your dataset in your Cloud Storage bucket; however, TensorFlow Cloud can also accommodate datasets stored locally. That's covered in the Multi-file section of this guide.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
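If the dataset lives in your Cloud Storage bucket instead, TensorFlow can stream gs:// paths directly through tf.data. A minimal sketch, assuming hypothetical TFRecord files have already been uploaded to the bucket:

import tensorflow as tf

# Hypothetical TFRecord file in the bucket; each serialized record would
# still need to be parsed (e.g. with tf.io.parse_single_example) before training.
train_files = ["gs://keras-examples/mnist_example/train.tfrecord"]
train_dataset = tf.data.TFRecordDataset(train_files).batch(128)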
The TensorFlow Cloud API provides the remote() function to determine whether code is being executed locally or on the cloud. This allows fit() parameters to be designated separately for local and remote execution, and makes debugging easy without overloading your local machine.
if tfc.remote():
    epochs = 100
    callbacks = callbacks
    batch_size = 128
else:
    epochs = 5
    batch_size = 64
    callbacks = None

model.fit(x_train, y_train, epochs=epochs, callbacks=callbacks, batch_size=batch_size)
Epoch 1/5
938/938 [==============================] - 6s 7ms/step - loss: 0.2021 - sparse_categorical_accuracy: 0.9383
Epoch 2/5
938/938 [==============================] - 6s 7ms/step - loss: 0.0533 - sparse_categorical_accuracy: 0.9836
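After the remote job completes, the trained model can be written to the same bucket and retrieved on your local machine. A sketch that mirrors the checkpoint path used above:

save_path = os.path.join("gs://", gcp_bucket, "mnist_example")

if tfc.remote():
    # Save the final model to Cloud Storage at the end of the remote run.
    model.save(save_path)

# Later, on your local machine, load the trained model straight from the bucket.
model = keras.models.load_model(save_path)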