Epoch 3/5
938/938 [==============================] - 6s 7ms/step - loss: 0.0385 - sparse_categorical_accuracy: 0.9883
Epoch 4/5
938/938 [==============================] - 6s 7ms/step - loss: 0.0330 - sparse_categorical_accuracy: 0.9895
Epoch 5/5
938/938 [==============================] - 6s 7ms/step - loss: 0.0255 - sparse_categorical_accuracy: 0.9916
<tensorflow.python.keras.callbacks.History at 0x7f9fb82bbf40>
Let's save the model to GCS once training is complete.
save_path = os.path.join("gs://", gcp_bucket, "mnist_example")

if tfc.remote():
    model.save(save_path)
We can also use this storage bucket for Docker image building, instead of your local Docker instance. For this, just add your bucket to the docker_image_bucket_name parameter. |
# docs_infra: no_execute |
tfc.run(docker_image_bucket_name=gcp_bucket) |
After training the model, we can load the saved model and view our TensorBoard logs to monitor performance. |
# docs_infra: no_execute |
model = keras.models.load_model(save_path) |
# docs_infra: no_execute
!tensorboard dev upload --logdir "gs://keras-examples-jonah/logs/fit" --name "Guide MNIST" |
Large-scale projects |
In many cases, your project containing a Keras model may encompass more than one Python script, or may involve external data or specific dependencies. TensorFlow Cloud is entirely flexible for large-scale deployment, and provides a number of intelligent functionalities to aid your projects. |
Entry points: support for Python scripts and Jupyter notebooks |
Your call to the run() API won't always be contained inside the same Python script as your model training code. For this purpose, we provide an entry_point parameter. The entry_point parameter can be used to specify the Python script or notebook in which your model training code lives. When calling run() from the same script as your model, use the entry_point default of None. |
pip dependencies |
If your project calls on additional pip dependencies, it's possible to specify the additional required libraries by including a requirements.txt file. In this file, simply put a list of all the required dependencies and TensorFlow Cloud will handle integrating these into your cloud build. |
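As a sketch, a minimal requirements.txt simply lists one package per line (the package names and version pins below are illustrative, not taken from this guide):

```text
matplotlib
pandas>=1.0
pillow
```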
Python notebooks |
TensorFlow Cloud is also runnable from Python notebooks. Additionally, your specified entry_point can be a notebook if needed. There are two key differences to keep in mind between TensorFlow Cloud on notebooks compared to scripts: |
When calling run() from within a notebook, a Cloud Storage bucket must be specified for building and storing your Docker image. |
GCloud authentication happens entirely through your authentication key, without project specification. An example workflow using TensorFlow Cloud from a notebook is provided in the "Putting it all together" section of this guide. |
Multi-file projects |
If your model depends on additional files, you only need to ensure that these files live in the same directory (or a subdirectory) as the specified entry point. Every file stored in the same directory as the specified entry_point will be included in the Docker image, along with any files stored in subdirectories adjacent to the entry_point. This also applies to dependencies that can't be acquired through pip.
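As a sketch, a hypothetical project layout for a custom entry point might look like this (all file names here are illustrative):

```text
my_project/
├── train_model.py        # passed as entry_point to tfc.run()
├── requirements.txt      # extra pip dependencies
├── utils/
│   └── preprocessing.py  # included automatically (adjacent subdirectory)
└── data/
    └── labels.csv        # non-pip asset, also included in the Docker image
```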
For an example of a custom entry point and multi-file project with additional pip dependencies, take a look at this multi-file example on the TensorFlow Cloud repository. For brevity, we'll just include the example's run() call:
tfc.run(
    docker_image_bucket_name=gcp_bucket,
    entry_point="train_model.py",
    requirements_txt="requirements.txt"
)
Machine configuration and distributed training |
Model training may require a wide range of different resources, depending on the size of the model or the dataset. When accounting for configurations with multiple GPUs, it becomes critical to choose a fitting distribution strategy. Here, we outline a few possible configurations: |
Multi-worker distribution |
Here, we can use COMMON_MACHINE_CONFIGS to designate 1 chief on CPU and 2 workers, each with 4 T4 GPUs ('T4_4X').
tfc.run(
    docker_image_bucket_name=gcp_bucket,
    chief_config=tfc.COMMON_MACHINE_CONFIGS['CPU'],
    worker_count=2,
    worker_config=tfc.COMMON_MACHINE_CONFIGS['T4_4X']
)
By default, TensorFlow Cloud chooses the best distribution strategy for your machine configuration with a simple formula using the chief_config, worker_config and worker_count parameters provided. |
If the number of GPUs specified is greater than zero, tf.distribute.MirroredStrategy will be chosen. |
If the number of workers is greater than zero, tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.experimental.TPUStrategy will be chosen based on the accelerator type. |
Otherwise, tf.distribute.OneDeviceStrategy will be chosen. |
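The selection rules above can be sketched in plain Python. Note that this is an illustrative reimplementation, not TensorFlow Cloud's actual internal code; the function name and argument names are assumptions, and when both workers and accelerators are present, the multi-worker case takes precedence in this sketch.

```python
def pick_distribution_strategy(worker_count, accelerator_count, accelerator_type):
    """Sketch of the strategy-selection formula described above.

    worker_count:      number of workers beyond the chief
    accelerator_count: accelerators per machine (0 for CPU-only)
    accelerator_type:  e.g. "GPU" or "TPU"
    """
    # Workers present: pick a multi-worker strategy based on accelerator type.
    if worker_count > 0:
        if accelerator_type == "TPU":
            return "tf.distribute.experimental.TPUStrategy"
        return "tf.distribute.experimental.MultiWorkerMirroredStrategy"
    # Single machine with one or more GPUs.
    if accelerator_count > 0:
        return "tf.distribute.MirroredStrategy"
    # CPU-only, single machine.
    return "tf.distribute.OneDeviceStrategy"
```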
TPU distribution |
Let's train the same model on TPU, as shown: |
tfc.run(
    docker_image_bucket_name=gcp_bucket,
    chief_config=tfc.COMMON_MACHINE_CONFIGS["CPU"],
    worker_count=1,
    worker_config=tfc.COMMON_MACHINE_CONFIGS["TPU"]
)
Custom distribution strategy |
To specify a custom distribution strategy, format your code normally as you would according to the distributed training guide and set distribution_strategy to None. Below, we'll specify our own distribution strategy for the same MNIST model. |
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = create_model()

if tfc.remote():
    epochs = 100
    batch_size = 128
else:
    epochs = 10
    batch_size = 64

callbacks = None

model.fit(
    x_train, y_train, epochs=epochs, callbacks=callbacks, batch_size=batch_size
)
tfc.run(
    docker_image_bucket_name=gcp_bucket,
    chief_config=tfc.COMMON_MACHINE_CONFIGS['CPU'],
    worker_count=2,
    worker_config=tfc.COMMON_MACHINE_CONFIGS['T4_4X'],
    distribution_strategy=None
)