Concepedia

TLDR

Deep learning neural‑network recommendation models have emerged as key tools for personalization, yet their handling of categorical features remains under‑studied. The authors develop a state‑of‑the‑art deep learning recommendation model (DLRM) and release its implementation in PyTorch and Caffe2. DLRM employs a specialized parallelization scheme that applies model parallelism to embedding tables to reduce memory usage while using data parallelism for fully‑connected layers to scale compute. DLRM’s performance on the Big Basin AI platform demonstrates its usefulness as a benchmark for future algorithmic experimentation and system co‑design.

Abstract

With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks due to their need to handle categorical features and are not well studied or understood. In this paper, we develop a state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both PyTorch and Caffe2 frameworks. In addition, we design a specialized parallelization scheme utilizing model parallelism on the embedding tables to mitigate memory constraints while exploiting data parallelism to scale-out compute from the fully-connected layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design.
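The structure the abstract describes — embedding-table lookups for categorical features (the memory-heavy part that motivates model parallelism) and fully-connected layers for dense features (the compute-heavy part scaled with data parallelism) — can be illustrated with a minimal NumPy sketch. All names and sizes below are illustrative, not from the paper, and a simple dot product stands in for DLRM's feature-interaction step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the paper.
NUM_CATEGORIES = 1000   # rows in one embedding table
EMBED_DIM = 16          # embedding dimension
DENSE_DIM = 13          # number of continuous (dense) features

# One embedding table: maps each categorical id to a learned vector.
# In DLRM these tables dominate memory, hence model parallelism.
embedding_table = rng.normal(size=(NUM_CATEGORIES, EMBED_DIM))

# A small fully-connected layer for the dense features.
# These layers dominate compute, hence data parallelism.
W = rng.normal(size=(DENSE_DIM, EMBED_DIM))
b = np.zeros(EMBED_DIM)

def forward(dense_features, categorical_ids):
    """Embed categorical ids, transform dense features, and combine
    the two representations (here, with a simple dot product)."""
    # Sparse path: table lookup, pooled over the ids in the sample.
    sparse_vec = embedding_table[categorical_ids].sum(axis=0)
    # Dense path: fully-connected layer with a ReLU nonlinearity.
    dense_vec = np.maximum(dense_features @ W + b, 0.0)
    # Interaction between the dense and sparse representations.
    return float(dense_vec @ sparse_vec)

score = forward(rng.normal(size=DENSE_DIM), np.array([3, 42, 7]))
```

The sketch shows why the two parts parallelize differently: the lookup touches only a few rows of a very large table per sample, while the matrix multiply runs the same dense compute on every sample.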
