Concepedia

Publication | Open Access

Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction

Year: 2016 | Citations: 218 | References: 26

TLDR

Deep learning has rapidly expanded into industry and academia, yet its widespread use raises significant privacy concerns that have received limited scientific attention. This study introduces the deep private auto‑encoder (dPA) to address privacy preservation in auto‑encoders. dPA enforces ε‑differential privacy by perturbing the objective functions of a traditional deep auto‑encoder and is applied to human behavior prediction in a health social network. Theoretical analysis and extensive experiments demonstrate that dPA is highly effective and efficient, outperforming existing solutions.

Abstract

In recent years, deep learning has spread across both academia and industry, with many exciting real-world applications. This rapid development has raised obvious privacy concerns, yet there has been a lack of scientific study of privacy preservation in deep learning. In this paper, we concentrate on the auto-encoder, a fundamental component of deep learning, and propose the deep private auto-encoder (dPA). Our main idea is to enforce ε-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results. We apply the dPA to human behavior prediction in a health social network. Theoretical analysis and thorough experimental evaluations show that the dPA is highly effective and efficient, and that it significantly outperforms existing solutions.
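The core idea of objective perturbation can be illustrated with a minimal sketch. This is not the paper's dPA algorithm: it simplifies to a linear auto-encoder whose reconstruction loss is a quadratic in the weights, so the data enters the objective only through coefficient matrices. Adding Laplace noise (scale = sensitivity/ε) to those coefficients once, before training, makes any weights found by minimizing the noisy objective differentially private. All function names and the sensitivity parameter here are illustrative assumptions, not the paper's API.

```python
import numpy as np

def perturbed_coefficients(X, epsilon, sensitivity, seed=0):
    """Perturb the data-dependent coefficients of a quadratic
    reconstruction objective (functional-mechanism style).

    For a linear auto-encoder with weights W, the loss
    ||X - X W W^T||_F^2 depends on the data only through the
    Gram matrix G = X^T X. Injecting Laplace noise with scale
    sensitivity/epsilon into G once means every W chosen by
    minimizing the noisy objective inherits epsilon-DP.
    """
    rng = np.random.default_rng(seed)
    gram = X.T @ X  # data-dependent coefficient matrix
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=gram.shape)
    return gram + noise

def noisy_loss(W, noisy_gram):
    """Reconstruction loss expressed through the (perturbed) coefficients:
    tr(G) - 2 tr(G M) + tr(M G M), where M = W W^T."""
    M = W @ W.T
    return (np.trace(noisy_gram)
            - 2.0 * np.trace(noisy_gram @ M)
            + np.trace(M @ noisy_gram @ M))
```

With a very large ε the noise vanishes and the coefficient form recovers the exact loss, which is a useful sanity check; in the paper's setting the same principle is extended to deep, nonlinear auto-encoders via polynomial approximation of the objective.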
