
Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication

Year: 2019 · Citations: 15 · References: 21

TLDR

Large deep neural networks trained on ever-growing data corpora make distributed training increasingly relevant, yet limited communication bandwidth and high communication costs pose a major challenge. The authors propose Sparse Binary Compression (SBC) to drastically reduce the communication cost of distributed training. SBC merges communication delay, gradient sparsification, a novel binarization technique, and optimal weight-update encoding, enabling an adjustable trade-off between gradient sparsity and temporal sparsity. Experiments show that SBC can cut upstream communication by more than four orders of magnitude while preserving convergence speed, reducing the upstream traffic of ResNet50 training on ImageNet from 125 TB to 3.35 GB per client at the cost of only a 1% accuracy drop.
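
(For scale, 125 TB / 3.35 GB ≈ 3.7 × 10^4, a little more than four orders of magnitude.)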

Abstract

Currently, progressively larger deep neural networks are trained on ever-growing data corpora. As a result, distributed training schemes are becoming increasingly relevant. A major issue in distributed training is the limited communication bandwidth between contributing nodes, or the prohibitive communication cost in general. To mitigate this problem we propose Sparse Binary Compression (SBC), a compression framework that allows for a drastic reduction of the communication cost of distributed training. SBC combines existing techniques of communication delay and gradient sparsification with a novel binarization method and optimal weight-update encoding to push compression gains to new limits. In doing so, our method also allows us to smoothly trade off gradient sparsity and temporal sparsity to adapt to the requirements of the learning task. Our experiments show that SBC can reduce the upstream communication on a variety of convolutional and recurrent neural network architectures by more than four orders of magnitude without significantly harming the convergence speed in terms of forward-backward passes. For instance, we can train ResNet50 on ImageNet in the same number of iterations to the baseline accuracy using ×3531 fewer bits, or train it to a 1% lower accuracy using ×37208 fewer bits. In the latter case, the total upstream communication required is cut from 125 terabytes to 3.35 gigabytes for every participating client.
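
A minimal sketch of the two compression steps named above (gradient sparsification followed by binarization), assuming NumPy; the function names and the sparsity parameter are illustrative, and residual accumulation, communication delay, and the Golomb position encoding from the paper are left out:

import numpy as np

def sparse_binary_compress(grad, sparsity=0.001):
    """Sparsify a gradient tensor to its largest-magnitude entries,
    then binarize the survivors to a single shared mean value."""
    flat = grad.ravel()
    k = max(1, int(sparsity * flat.size))

    # Gradient sparsification: indices of the k largest |g_i|.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    vals = flat[idx]

    # Binarization: average the positive and negative survivors
    # separately and keep only the sign group with the larger mean.
    pos, neg = vals[vals > 0], vals[vals < 0]
    mu_pos = pos.mean() if pos.size else 0.0
    mu_neg = -neg.mean() if neg.size else 0.0
    if mu_pos >= mu_neg:
        keep, value = idx[vals > 0], mu_pos
    else:
        keep, value = idx[vals < 0], -mu_neg

    # The message is one float plus the kept positions (the paper
    # additionally compresses the positions with Golomb coding).
    return value, keep, grad.shape

def decompress(value, keep, shape):
    """Rebuild the dense weight update from the compressed message."""
    out = np.zeros(int(np.prod(shape)), dtype=np.float32)
    out[keep] = value
    return out.reshape(shape)

# Example: compress a random gradient at 1% sparsity.
g = np.random.randn(100_000).astype(np.float32)
value, keep, shape = sparse_binary_compress(g, sparsity=0.01)
g_hat = decompress(value, keep, shape)

In the full framework, entries that are not transmitted are kept in a local residual and added to the next gradient, and communication delay accumulates several local steps before a single compressed update is sent; both are omitted in the sketch above.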
