Publication | Open Access
Fast distributed coordinate descent for non-strongly convex losses
Citations: 28
References: 6
Year: 2014
Venue: unknown
Topics: Mathematical Programming, Computational Science, Engineering, Machine Learning, Distributed Coordinate Descent, Convex Optimization, Distributed Optimization, Convex Loss Functions, Parallel Learning, Large Scale Optimization, Convergence Rate, Inverse Problems, Computer Science, Regularization (Mathematics), Approximation Theory, Lasso Optimization Problem, Adaptive Optimization, Linear Optimization
We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions. The method attains the optimal O(1/k²) convergence rate, where k is the iteration counter. The core of the work is the theoretical study of stepsize parameters. We have implemented the method on Archer, the largest supercomputer in the UK, and show that the method is capable of solving a (synthetic) LASSO optimization problem with 50 billion variables.
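The paper's method is a distributed, accelerated coordinate descent scheme; the abstract only summarizes it. As a rough illustration of the basic building block it parallelizes, here is a minimal serial randomized coordinate descent sketch for a LASSO problem. All names and the simple (non-accelerated, non-distributed) update rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| applied to a scalar."""
    return np.sign(v) * max(abs(v) - t, 0.0)

def rcd_lasso(A, b, lam, n_iters=5000, seed=0):
    """Serial randomized coordinate descent for
        min_x 0.5*||A x - b||^2 + lam*||x||_1.
    Each step picks a coordinate i uniformly at random and
    minimizes the objective exactly over x_i via soft-thresholding,
    using the per-coordinate Lipschitz constant L_i = ||A[:, i]||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = -b.astype(float).copy()   # residual r = A x - b, maintained incrementally
    L = (A ** 2).sum(axis=0)      # per-coordinate Lipschitz constants
    for _ in range(n_iters):
        i = rng.integers(n)
        if L[i] == 0.0:
            continue
        g = A[:, i] @ r                          # partial gradient of the smooth part
        x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        r += A[:, i] * (x_new - x[i])            # cheap residual update, O(m)
        x[i] = x_new
    return x
```

Maintaining the residual incrementally is what makes each coordinate step cost O(m) rather than a full matrix-vector product; the distributed variants studied in the paper partition coordinates across machines and update many of them in parallel per iteration.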