Publication | Open Access
A delayed proximal gradient method with linear convergence rate
Citations: 49 · References: 8 · Year: 2014 · Venue: unknown
Topics: Numerical Analysis, Mathematical Programming, Linear Convergence Rate, Engineering, Continuous Optimization, Computer Engineering, Smooth Component Functions, Delayed Partial Gradients, Convergence Rate, Inverse Problems, Computer Science, Derivative-free Optimization, Large Scale Optimization, Nondifferentiable Optimization, Approximation Theory, Signal Processing, Convergence Analysis, Adaptive Optimization
This paper presents a new incremental gradient algorithm, based on delayed partial gradients, for minimizing the average of a large number of smooth component functions. Even with a constant step size, which can be chosen independently of the maximum delay bound and the number of objective function components, the expected objective value is guaranteed to converge linearly to within a ball around the optimum. We derive an explicit expression that quantifies how the convergence rate depends on objective function properties and algorithm parameters such as the step size and the maximum delay. An associated upper bound on the asymptotic error reveals the trade-off between convergence speed and residual error. Numerical examples confirm the validity of our results.
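The abstract describes iterating with partial gradients of randomly chosen components, evaluated at stale iterates whose age is bounded by a maximum delay, with a constant step size. The sketch below illustrates that idea on a toy problem; the function names, the uniform sampling of components and delays, and the quadratic example are all assumptions for illustration, not the paper's exact method or analysis.

```python
import numpy as np

def delayed_incremental_gradient(grads, x0, step, max_delay, iters, seed=None):
    """Illustrative sketch: minimize (1/n) * sum_i f_i(x) using the gradient
    of one randomly chosen component, evaluated at a stale iterate whose
    age is at most `max_delay` (assumed uniform for simplicity)."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    history = [np.asarray(x0, dtype=float)]
    x = history[0].copy()
    for _ in range(iters):
        i = rng.integers(n)                      # pick one component f_i
        delay = rng.integers(max_delay + 1)      # bounded staleness
        x_stale = history[max(0, len(history) - 1 - delay)]
        x = x - step * grads[i](x_stale)         # constant step size
        history.append(np.asarray(x, dtype=float).copy())
    return x

# Toy problem: f_i(x) = 0.5 * (x - a_i)^2, so the optimum is mean(a).
a = np.array([1.0, 2.0, 3.0, 4.0])
grads = [lambda x, ai=ai: x - ai for ai in a]
x_final = delayed_incremental_gradient(grads, x0=0.0, step=0.1,
                                       max_delay=3, iters=2000, seed=0)
```

Consistent with the abstract's guarantee, the iterates settle into a neighborhood of the optimum rather than converging exactly: the constant step size trades convergence speed for a residual error around `mean(a)`.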