Abstract
We introduce Breadth-First Pipeline Parallelism, a novel training schedule that optimizes the combination of pipeline and data parallelism. Breadth-First Pipeline Parallelism lowers training time, cost, and memory usage by combining high GPU utilization with a small batch size per GPU, and by making use of fully sharded data parallelism. Experimentally, we observed an increase of up to 43% in training throughput for a 52 billion-parameter model using a small batch size per GPU compared to Megatron-LM, which would reduce the training time and cost by the same amount on a large GPU cluster.
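To make the scheduling idea concrete, below is a minimal toy sketch, not the paper's implementation. It assumes a "looped" pipeline where each GPU holds several pipeline stages, and contrasts a depth-first ordering (each micro-batch traverses all local stages before the next micro-batch starts) with a breadth-first ordering (all micro-batches pass through one local stage before advancing to the next). The function names, `local_stages`, and `microbatches` are illustrative choices, not identifiers from the paper.

```python
# Toy enumeration of forward-pass orderings on one GPU of a looped pipeline.
# This only illustrates the ordering contrast; it performs no actual training.

def depth_first_order(local_stages: int, microbatches: int):
    """Each micro-batch runs through every local stage before the next micro-batch starts."""
    return [(m, s) for m in range(microbatches) for s in range(local_stages)]

def breadth_first_order(local_stages: int, microbatches: int):
    """All micro-batches run through a given local stage before advancing to the next stage."""
    return [(m, s) for s in range(local_stages) for m in range(microbatches)]

if __name__ == "__main__":
    # (micro-batch, local stage) pairs for 2 local stages and 4 micro-batches.
    print("depth-first: ", depth_first_order(2, 4))
    print("breadth-first:", breadth_first_order(2, 4))
```

Under this reading, grouping all micro-batches per stage is what makes it plausible to overlap the per-stage (fully sharded) data-parallel communication with compute even at small batch sizes per GPU; treat that interpretation as a hedged gloss on the abstract rather than a statement of the paper's exact schedule.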