TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

TLDR

Distributed training is bottlenecked by the high network communication cost of synchronizing gradients and parameters. The authors propose TernGrad, which compresses gradients to three levels {-1, 0, 1} to accelerate data-parallel distributed deep learning. Guided by a bounded-gradient assumption, the method employs layer-wise ternarizing and gradient clipping, and its convergence is mathematically proven. A performance model is also introduced to study TernGrad's scalability. Experiments show no accuracy loss on AlexNet (and even improvement), less than 2% average accuracy loss on GoogLeNet, and significant speed gains across various deep neural networks. Source code is publicly available.

Abstract

High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn't incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available.
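To make the idea concrete, below is a minimal NumPy sketch of stochastic ternarization with gradient clipping as the abstract describes it. This is not the authors' released implementation; the function names (`clip_gradient`, `ternarize`, `terngrad_step`) are illustrative, and the clipping factor c = 2.5 follows the value the paper reports.

```python
import numpy as np

def clip_gradient(grad, c=2.5):
    """Clip gradient entries to +/- c standard deviations.

    Clipping narrows the gradient range before ternarizing;
    c = 2.5 follows the value reported in the paper.
    """
    sigma = grad.std()
    return np.clip(grad, -c * sigma, c * sigma)

def ternarize(grad, rng=None):
    """Stochastically map a gradient tensor to {-s, 0, +s}.

    s is the maximum absolute value of the tensor; each entry keeps
    its sign with probability |g_k| / s, so the ternary gradient is
    an unbiased estimator of the original gradient.
    """
    rng = rng or np.random.default_rng()
    s = np.abs(grad).max()
    if s == 0:
        return np.zeros_like(grad)
    keep = rng.random(grad.shape) < np.abs(grad) / s  # P(keep) = |g_k| / s
    return s * np.sign(grad) * keep

def terngrad_step(layer_grads):
    """Layer-wise ternarizing: clip and ternarize each layer's
    gradient independently, each with its own scaler s."""
    return [ternarize(clip_gradient(g)) for g in layer_grads]
```

In an actual data-parallel system, each worker would transmit only the per-layer scaler s plus roughly two bits per gradient entry, which is where the communication savings come from; the dense floating-point arrays above are kept for readability.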
