run_training(epochs=1)
# Calling the same function again will resume from where we left off
run_training(epochs=1)
W0829 16:55:03.609519 4592479680 cross_device_ops.py:1115] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
Creating a new model
W0829 16:55:03.708506 4592479680 callbacks.py:1270] Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the `keras.callbacks.experimental.BackupAndRestore` callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback.
1563/1563 - 4s - loss: 0.2242 - sparse_categorical_accuracy: 0.9321 - val_loss: 0.1243 - val_sparse_categorical_accuracy: 0.9647
W0829 16:55:07.981292 4592479680 cross_device_ops.py:1115] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
Restoring from ./ckpt/ckpt-1
W0829 16:55:08.245935 4592479680 callbacks.py:1270] Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the `keras.callbacks.experimental.BackupAndRestore` callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback.
1563/1563 - 4s - loss: 0.0948 - sparse_categorical_accuracy: 0.9709 - val_loss: 0.1006 - val_sparse_categorical_accuracy: 0.9699
tf.data performance tips |
When doing distributed training, the efficiency with which you load data can often become critical. Here are a few tips to make sure your tf.data pipelines run as fast as possible. |
Note about dataset batching |
When creating your dataset, make sure it is batched with the global batch size. For instance, if each of your 8 GPUs is capable of running a batch of 64 samples, you would use a global batch size of 512 (64 x 8).
Calling dataset.cache() |
If you call .cache() on a dataset, its data will be cached after running through the first iteration over the data. Every subsequent iteration will use the cached data. The cache can be in memory (default) or to a local file you specify. |
This can improve performance when: |
Your data is not expected to change from iteration to iteration |
You are reading data from a remote distributed filesystem |
You are reading data from local disk, but your data would fit in memory and your workflow is significantly IO-bound (e.g. reading & decoding image files). |
Calling dataset.prefetch(buffer_size) |
You should almost always call .prefetch(buffer_size) after creating a dataset. It means your data pipeline will run asynchronously from your model, with new samples being preprocessed and stored in a buffer while the current batch samples are used to train the model. The next batch will be prefetched in GPU memory by the time the current batch is over. |
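As a concrete illustration of these three tips, here is a minimal sketch of a pipeline built from in-memory arrays. The array shapes, batch sizes, and replica count are placeholders, not values from the guide above.
import numpy as np
import tensorflow as tf

# Dummy data standing in for your real training set.
x_train = np.random.random((50000, 784)).astype("float32")
y_train = np.random.randint(0, 10, size=(50000,))

# Hypothetical setup: 8 replicas, each processing 64 samples per step.
per_replica_batch_size = 64
num_replicas = 8
global_batch_size = per_replica_batch_size * num_replicas  # 512

dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Cache after the first full pass over the data (in memory by default).
dataset = dataset.cache()
dataset = dataset.shuffle(buffer_size=1024)
# Batch with the *global* batch size; the strategy splits it across replicas.
dataset = dataset.batch(global_batch_size)
# Prefetch so preprocessing overlaps with training on the current batch.
dataset = dataset.prefetch(tf.data.AUTOTUNE)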
Multi-worker distributed synchronous training |
How it works |
In this setup, you have multiple machines (called workers), each with one or several GPUs on them. Much like what happens for single-host training, each available GPU will run one model replica, and the value of the variables of each replica is kept in sync after each batch. |
Importantly, the current implementation assumes that all workers have the same number of GPUs (homogeneous cluster). |
How to use it |
Set up a cluster (we provide pointers below). |
Set up an appropriate TF_CONFIG environment variable on each worker. This tells the worker what its role is and how to communicate with its peers. |
On each worker, run your model construction & compilation code within the scope of a MultiWorkerMirroredStrategy object, similarly to what we did for single-host training.
Run evaluation code on a designated evaluator machine. |
Setting up a cluster |
First, set up a cluster (collective of machines). Each machine individually should be set up so that it is able to run your model (typically, each machine will run the same Docker image) and able to access your data source (e.g. GCS).
Cluster management is beyond the scope of this guide. Here is a document to help you get started. You can also take a look at Kubeflow. |
Setting up the TF_CONFIG environment variable |
The code running on each worker is almost the same as the code used in the single-host workflow (except that it uses a different tf.distribute strategy object). One significant difference is that you also need to set a TF_CONFIG environment variable on each machine running in your cluster.
The TF_CONFIG environment variable is a JSON string that specifies: |
The cluster configuration, i.e. the list of addresses & ports of the machines that make up the cluster
The worker's "task", which is the role that this specific machine has to play within the cluster. |
One example of TF_CONFIG is: |
import json
import os

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 0}
})
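For comparison (continuing the snippet above, and not part of the original example), the second worker in this two-worker cluster would set the same cluster spec and change only the task index:
# On the second worker, only the task index differs.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 1}
})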
In the multi-worker synchronous training setup, valid roles (task types) for the machines are "worker" and "evaluator". |
For example, if you have 8 machines with 4 GPUs each, you could have 7 workers and one evaluator. |
The workers train the model, each one processing sub-batches of a global batch. |
One of the workers (worker 0) will serve as "chief", a particular kind of worker that is responsible for saving logs and checkpoints for later reuse (typically to a Cloud storage location). |
The evaluator runs a continuous loop that loads the latest checkpoint saved by the chief worker, runs evaluation on it (asynchronously from the other workers) and writes evaluation logs (e.g. TensorBoard logs). |
Running code on each worker |
You would run training code on each worker (including the chief) and evaluation code on the evaluator. |
The training code is basically the same as what you would use in the single-host setup, except using MultiWorkerMirroredStrategy instead of MirroredStrategy. |
Each worker would run the same code (minus the difference explained in the note below), including the same callbacks. |
Note: Callbacks that save model checkpoints or logs should save to a different directory for each worker. It is standard practice for all workers to save to local disk (which is typically temporary), except worker 0, which would save TensorBoard logs and checkpoints to a Cloud storage location for later access & reuse.
The evaluator would simply use MirroredStrategy (since it runs on a single machine and does not need to communicate with other machines) and call model.evaluate(). It would load the latest checkpoint that the chief worker saved to the Cloud storage location, and would save evaluation logs to the same location as the chief's logs.
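Here is a minimal sketch of what such an evaluator loop might look like. The checkpoint directory, polling interval, and validation dataset are placeholders, and the exact loading logic depends on how the chief's ModelCheckpoint callback names and writes its checkpoints.
import os
import time

import tensorflow as tf
from tensorflow import keras


def run_evaluator(checkpoint_dir, val_dataset, poll_seconds=60):
    """Continuously evaluate the most recent checkpoint saved by the chief."""
    strategy = tf.distribute.MirroredStrategy()
    last_seen = None
    while True:
        # tf.io.gfile works for both local paths and Cloud storage (e.g. gs://) paths.
        checkpoints = tf.io.gfile.glob(os.path.join(checkpoint_dir, "*"))
        if checkpoints:
            # Simplification: assumes checkpoint names sort in creation order.
            latest = sorted(checkpoints)[-1]
            if latest != last_seen:
                with strategy.scope():
                    model = keras.models.load_model(latest)
                model.evaluate(val_dataset)
                last_seen = latest
        time.sleep(poll_seconds)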
Example: code running in a multi-worker setup |
On the chief (worker 0): |
# Set TF_CONFIG
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 0}
})
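The original example is cut off at this point. What follows is a minimal sketch of the training code the chief might run next, based on the description above; the model, dataset, batch size, and save paths are placeholders rather than code from the original guide.
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Open a strategy scope and build (or restore) the model inside it.
strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
    # Placeholder model; in practice you would build or restore your real model here.
    model = keras.Sequential([
        keras.Input(shape=(784,)),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["sparse_categorical_accuracy"],
    )

# Dummy data standing in for your real training set.
x_train = np.random.random((1024, 784)).astype("float32")
y_train = np.random.randint(0, 10, size=(1024,))
global_batch_size = 128  # placeholder: per-replica batch size x number of replicas
train_dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .batch(global_batch_size)
    .prefetch(tf.data.AUTOTUNE)
)

callbacks = [
    # As the chief (worker 0), save checkpoints and TensorBoard logs to a shared
    # location (e.g. Cloud storage); other workers would use a local temp directory.
    keras.callbacks.ModelCheckpoint(filepath="./ckpt/ckpt-{epoch}"),  # placeholder path
    keras.callbacks.TensorBoard(log_dir="./logs"),  # placeholder path
]

model.fit(train_dataset, epochs=2, callbacks=callbacks)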