Concepedia

Publication | Open Access

Accelerating SLIDE Deep Learning on Modern CPUs: Vectorization, Quantizations, Memory Optimizations, and More

Citations: 16

References: 6

Year: 2021

Abstract

Deep learning implementations on CPUs (Central Processing Units) are gaining more traction. Enhanced AI capabilities on commodity x86 architectures are commercially appealing due to the reuse of existing hardware and virtualization ease. A notable work in this direction is the SLIDE system. SLIDE is a C++ implementation of a sparse hash table based back-propagation, which was shown to be significantly faster than GPUs in training hundreds of million parameter neural models. In this paper, we argue that SLIDE's current implementation is sub-optimal and does not exploit several opportunities available in modern CPUs. In particular, we show how SLIDE's computations allow for a unique possibility of vectorization via AVX (Advanced Vector Extensions)-512. Furthermore, we highlight opportunities for different kinds of memory optimization and quantizations. Combining all of them, we obtain up to 7x speedup in the computations on the same hardware. Our experiments are focused on large (hundreds of millions of parameters) recommendation and NLP models. Our work highlights several novel perspectives and opportunities for implementing randomized algorithms for deep learning on modern CPUs. We provide the code and benchmark scripts at https://github.com/RUSH-LAB/SLIDE
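To make the AVX-512 vectorization claim concrete, below is a minimal C++ sketch (not the authors' code) of the kind of sparse inner product that dominates SLIDE's forward pass: a dense weight row is gathered at the active (nonzero) input indices and accumulated with fused multiply-adds. The function name, the parallel-array sparse layout (indices/values), and the tail handling are assumptions for illustration only.

// Hedged sketch of AVX-512 gather + FMA for a sparse inner product,
// in the spirit of the vectorization opportunity described in the abstract.
// Assumed layout: one neuron's dense weight row `weights`, and a sparse input
// given as parallel arrays `indices` (active feature ids) and `values`.
#include <immintrin.h>
#include <cstddef>

float sparse_dot_avx512(const float* weights,   // dense weight row of one neuron
                        const int*   indices,   // active (nonzero) input indices
                        const float* values,    // corresponding input values
                        std::size_t  nnz) {     // number of active inputs
    __m512 acc = _mm512_setzero_ps();
    std::size_t i = 0;
    for (; i + 16 <= nnz; i += 16) {
        // Gather 16 weights addressed by the sparse indices, load 16 input
        // values, and accumulate with a fused multiply-add.
        __m512i idx = _mm512_loadu_si512(reinterpret_cast<const void*>(indices + i));
        __m512  w   = _mm512_i32gather_ps(idx, weights, /*scale=*/4);
        __m512  v   = _mm512_loadu_ps(values + i);
        acc = _mm512_fmadd_ps(w, v, acc);
    }
    float sum = _mm512_reduce_add_ps(acc);
    for (; i < nnz; ++i)            // scalar tail for the remaining elements
        sum += weights[indices[i]] * values[i];
    return sum;
}

A scalar loop over the same arrays would touch the weight row one element at a time; the gather instruction lets 16 irregularly placed weights be fetched per iteration, which is why SLIDE's sparse, hash-table-selected activations are a good fit for this instruction set.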
