Publication | Closed Access
Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
Citations: 1.1K · References: 22 · Year: 2011
Keywords: Engineering · Machine Learning · Data Science · Distributed Algorithms · Most Gradient · Stochastic Gradient Descent · Parallel Learning · Computer Engineering · Lock-free Approach · Large Scale Optimization · Parallel Programming · Computer Science · Distributed Learning · Parallel Computing · Deep Learning · Data-level Parallelism · Update Scheme · Adaptive Optimization
Stochastic Gradient Descent is widely used for large-scale machine-learning tasks, yet existing parallelization approaches rely on memory locking that degrades performance. The study proposes a lock-free update scheme, HOGWILD!, that enables parallel SGD without memory locking: processors write concurrently to shared memory, tolerating overwrites of each other's updates.
Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.
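The update scheme described in the abstract is simple to sketch: each thread repeatedly samples an example, computes a gradient that touches only the few coordinates that example involves, and writes the step into the shared iterate with no lock whatsoever. The snippet below is a hypothetical Python illustration of that idea on a synthetic sparse least-squares problem; the data layout, thread count, and hyperparameters are assumptions for the sketch, not the authors' code.

```python
# Minimal HOGWILD!-style sketch: lock-free, sparse SGD updates to a shared
# vector. Illustrative only; all sizes and learning rates are assumptions.
import threading
import numpy as np

N_FEATURES = 1000
N_EXAMPLES = 10_000
NNZ = 10                 # each example touches only NNZ coordinates (sparsity)
N_THREADS = 4
LR = 0.01
STEPS_PER_THREAD = 25_000

rng = np.random.default_rng(0)
# Synthetic sparse data: each row stores NNZ feature indices and their values.
idx = rng.integers(0, N_FEATURES, size=(N_EXAMPLES, NNZ))
val = rng.normal(size=(N_EXAMPLES, NNZ))
w_true = rng.normal(size=N_FEATURES)
y = np.array([val[i] @ w_true[idx[i]] for i in range(N_EXAMPLES)])

# Shared decision variable: every thread reads and writes it with NO locking.
w = np.zeros(N_FEATURES)

def worker(seed):
    r = np.random.default_rng(seed)
    for _ in range(STEPS_PER_THREAD):
        i = r.integers(N_EXAMPLES)
        j, x = idx[i], val[i]
        # The residual is computed from a possibly stale snapshot of w;
        # concurrent overwrites by other threads are tolerated, not prevented.
        err = x @ w[j] - y[i]
        # Sparse gradient step: only the NNZ coordinates of this example move.
        w[j] -= LR * err * x

threads = [threading.Thread(target=worker, args=(s,)) for s in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("mean squared error of recovered w:", np.mean((w - w_true) ** 2))
```

Note that CPython's GIL means this sketch demonstrates the semantics of unsynchronized updates rather than the paper's speedups, which come from a compiled implementation in which threads genuinely run in parallel on shared memory.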