Publication | Closed Access
Distributed Subgradient Methods for Convex Optimization Over Random Networks
Citations: 367 | References: 22 | Year: 2010
Keywords: Mathematical Programming · Network Science · Machine Learning · Engineering · Stochastic Optimization · Subgradient Methods · Distributed Subgradient Method · Convex Optimization · Convex Functions · Network Analysis · Distributed Constraint Optimization · Large Scale Optimization · Distributed AI Systems · Computer Science · Distributed Learning · Network Optimization · Multi-agent Optimization · Combinatorial Optimization
We consider the problem of cooperatively minimizing a sum of convex functions, where each function is the local objective of an agent. Each agent knows only its own local function and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms to share information locally among the agents. In contrast to previous work on multi-agent optimization, which makes worst-case assumptions about the connectivity of the agents (such as bounded intercommunication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we establish almost sure convergence of our subgradient algorithm.
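The abstract's scheme interleaves local averaging with subgradient steps over a randomly varying network. Below is a minimal, self-contained sketch of that idea under assumed choices the abstract does not specify: Metropolis mixing weights, symmetric i.i.d. Bernoulli link failures, a diminishing stepsize, and a toy absolute-value objective. All names, parameters, and the example network are illustrative, not the authors' exact algorithm.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic mixing matrix from an undirected adjacency
    matrix via the Metropolis rule, a common consensus-weight choice."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    W[np.diag_indices(n)] = 1.0 - W.sum(axis=1)  # rows (and cols) sum to 1
    return W

def distributed_subgradient(subgrads, x0, base_adj, link_prob,
                            steps, step0=1.0, seed=0):
    """Consensus-based subgradient sketch: each iteration, every agent
    averages neighbors' iterates over whichever links are up, then takes
    a local subgradient step with a diminishing stepsize."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)            # one row of iterates per agent
    n = len(subgrads)
    for k in range(steps):
        # Links fail i.i.d. over time; here each is up with prob. link_prob.
        up = rng.random(base_adj.shape) < link_prob
        up = np.triu(up, 1)
        up = up + up.T                        # keep the realization symmetric
        W = metropolis_weights(base_adj * up)
        alpha = step0 / np.sqrt(k + 1)        # diminishing stepsize (assumed)
        g = np.vstack([subgrads[i](x[i]) for i in range(n)])
        x = W @ x - alpha * g                 # average, then subgradient step
    return x

# Toy problem: agent i holds f_i(x) = |x - c_i|; the sum is minimized at
# the median of the targets c_i. Targets and ring network are illustrative.
c = np.array([-2.0, 0.0, 1.0, 5.0])
subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
x_final = distributed_subgradient(subgrads, np.zeros((4, 1)), ring,
                                  link_prob=0.7, steps=5000)
print(x_final.ravel())  # agents' iterates cluster near the median of c
```

With the diminishing stepsize, the agents' iterates both reach consensus and drift toward a minimizer of the sum, which is the qualitative behavior the almost-sure convergence result describes.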