Publication | Open Access
Deep Learning with Differential Privacy
Citations: 5.5K
References: 55
Year: 2016
Venue: ACM CCS
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
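The algorithmic core behind this result is the paper's differentially private SGD: clip each example's gradient to a fixed L2 bound, sum, and add Gaussian noise calibrated to that bound before averaging. A minimal sketch of one such aggregation step is below; the function name, array shapes, and parameter names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One noisy gradient aggregation step in the style of DP-SGD.

    per_example_grads: array of shape (batch_size, dim), one gradient per example.
    clip_norm: bound C on each example's gradient L2 norm.
    noise_multiplier: Gaussian noise scale sigma, relative to C.
    rng: a numpy random Generator.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its L2 norm exceeds the bound C.
        clipped.append(g / max(1.0, norm / clip_norm))
    clipped = np.stack(clipped)
    # Sum clipped gradients, add Gaussian noise calibrated to the
    # per-example sensitivity C, then average over the batch.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)
```

Because every per-example contribution is bounded by `clip_norm`, the added noise yields a differential-privacy guarantee for the step; the paper's refined "moments accountant" analysis then tracks how the privacy cost accumulates over many such steps.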