Publication | Closed Access
Distributed Autonomous Online Learning: Regrets and Intrinsic Privacy-Preserving Properties
Citations: 264
References: 13
Year: 2013
Keywords: Artificial Intelligence, Privacy Protection, Autonomous Online Learning, Engineering, Machine Learning, Information Security, Online Learning, Data Sources, Data Science, Decision Theory, Autonomous Learning, Online Algorithm, Convex Functions, Data Privacy, Computer Science, Distributed Learning, Differential Privacy, Privacy, Data Security, Cryptography, Privacy Preservation, Federated Learning
Online learning has become increasingly popular for handling massive data. The sequential nature of online learning, however, typically requires a centralized learner to store data and update parameters. In this paper, we consider online learning with distributed data sources. The autonomous learners update local parameters based on local data sources and periodically exchange information with a small subset of neighbors in a communication network. We derive the regret bound for strongly convex functions that generalizes the work by Ram et al. for convex functions. More importantly, we show that our algorithm has intrinsic privacy-preserving properties, and we prove the sufficient and necessary conditions for privacy preservation in the network. These conditions imply that for networks with greater-than-one connectivity, a malicious learner cannot reconstruct the subgradients (and sensitive raw data) of other learners, which makes our algorithm appealing in privacy-sensitive applications.
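The scheme the abstract describes can be sketched as follows: each learner takes a subgradient step on its own local loss, then mixes its parameters with those of its network neighbors. The ring topology, toy quadratic losses, noise level, and `1/t` step size below are illustrative assumptions for a minimal sketch, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n_learners, dim, T = 4, 3, 200

# Assumed ring topology: each learner averages with its two neighbors,
# using doubly stochastic mixing weights.
W = np.zeros((n_learners, n_learners))
for i in range(n_learners):
    W[i, i] = 0.5
    W[i, (i - 1) % n_learners] = 0.25
    W[i, (i + 1) % n_learners] = 0.25

x = np.zeros((n_learners, dim))   # local parameter vectors, one row per learner
target = np.ones(dim)             # common minimizer of the toy local losses

for t in range(1, T + 1):
    # Local subgradient of the strongly convex toy loss f_i(x) = 0.5*||x - target||^2,
    # perturbed with noise standing in for streaming local data.
    grads = x - target + 0.1 * rng.standard_normal((n_learners, dim))
    x = x - (1.0 / t) * grads     # diminishing step size, as used for strong convexity
    x = W @ x                     # exchange/averaging step with neighbors

# All learners end up close to the common minimizer despite only local communication.
print(np.allclose(x, target, atol=0.2))
```

Note that each learner only ever reveals mixed parameter values to its neighbors, never raw data or subgradients directly, which is the basis for the privacy properties the paper analyzes.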