Published on Mon Jan 28 2019

Bayesian Differential Privacy for Machine Learning

Aleksei Triastcyn, Boi Faltings

Traditional differential privacy is independent of the data distribution. This is not well-matched with the modern machine learning context, where models are trained on specific data. We propose Bayesian differential privacy (BDP) to provide more practical privacy guarantees.

Abstract

Traditional differential privacy is independent of the data distribution. However, this is not well-matched with the modern machine learning context, where models are trained on specific data. As a result, achieving meaningful privacy guarantees in ML often excessively reduces accuracy. We propose Bayesian differential privacy (BDP), which takes into account the data distribution to provide more practical privacy guarantees. We also derive a general privacy accounting method under BDP, building upon the well-known moments accountant. Our experiments demonstrate that in-distribution samples in classic machine learning datasets, such as MNIST and CIFAR-10, enjoy significantly stronger privacy guarantees than postulated by DP, while models maintain high classification accuracy.
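For orientation, the guarantee and accounting the abstract describes can be sketched as follows; the notation (mechanism \mathcal{A}, data distribution \mu, per-step log moments c_t(\lambda)) is our own shorthand and may differ from the paper's exact formalism. BDP asks the standard DP inequality to hold with the neighbouring point itself drawn from the data distribution:

\Pr[\mathcal{A}(D) \in S] \le e^{\epsilon} \, \Pr[\mathcal{A}(D') \in S] + \delta,

where D' differs from D in a single point x' \sim \mu and the probability is over both the mechanism's randomness and x'. Composition then mirrors the moments accountant: track the log moment generating function of the per-step privacy loss L_t and convert to (\epsilon, \delta) at the end,

c_t(\lambda) = \log \mathbb{E}\!\left[e^{\lambda L_t}\right], \qquad \epsilon = \min_{\lambda > 0} \frac{\sum_{t=1}^{T} c_t(\lambda) + \log(1/\delta)}{\lambda}.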

Fri Dec 07 2018
Machine Learning
Three Tools for Practical Differential Privacy
Differentially private learning on real-world data poses challenges for standard machine learning practice: privacy guarantees are difficult to interpret, hyperparameter tuning on private data reduces the privacy budget, and ad-hoc privacy attacks are often required to test model privacy.
Thu Dec 05 2019
Machine Learning
Element Level Differential Privacy: The Right Granularity of Privacy
Differential Privacy (DP) provides strong guarantees on the risk of compromising a user's data in statistical learning applications. We propose element level differential privacy, which extends DP to provide protection against leaking information about any particular "element" a user has.
Wed Dec 24 2014
Machine Learning
Differential Privacy and Machine Learning: a Survey and Review
The objective of machine learning is to extract useful information from data, while privacy is preserved by concealing information. We explore the interplay between machine learning and differential privacy. We also describe some theoretical results that address what can be learned differentially privately.
Mon Jun 07 2021
Machine Learning
Antipodes of Label Differential Privacy: PATE and ALIBI
Mon Sep 06 2021
Machine Learning
Statistical Privacy Guarantees of Machine Learning Preprocessing Techniques
Differential privacy provides strong privacy guarantees for machine learning applications. However, resampling techniques used to handle imbalanced datasets can cause the resulting model to leak more private information.
Sun May 26 2019
Machine Learning
Automatic Discovery of Privacy-Utility Pareto Fronts
This paper presents a Bayesian optimization methodology for efficiently characterizing the privacy-utility trade-off of any differentially private algorithm. The versatility of our method is illustrated on a number of machine learning tasks involving multiple models, optimizers, and datasets.
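The idea lends itself to a compact sketch. Below is a hedged illustration (not the authors' implementation) of driving a Gaussian-process optimizer over DP-SGD hyperparameters with scikit-optimize; train_and_evaluate, its fake trade-off curve, and the scalarization weight are all hypothetical stand-ins:

# Hedged sketch: Bayesian optimization over a noise-multiplier / clipping-norm
# space to find good privacy-utility trade-offs. `train_and_evaluate` is a
# hypothetical stand-in for a real DP-SGD training run plus privacy accounting.
from skopt import gp_minimize
from skopt.space import Real

def train_and_evaluate(noise_multiplier, clip_norm):
    # Placeholder: in practice, train with DP-SGD at these settings and
    # account the privacy loss; here we fake a smooth trade-off for the demo.
    accuracy = 0.95 - 0.1 * noise_multiplier + 0.01 * clip_norm
    epsilon = 8.0 / noise_multiplier
    return accuracy, epsilon

def objective(params):
    noise_multiplier, clip_norm = params
    accuracy, epsilon = train_and_evaluate(noise_multiplier, clip_norm)
    # Scalarize the two objectives: penalize spent privacy budget. Sweeping
    # the weight and re-running traces out (epsilon, accuracy) Pareto points.
    return -(accuracy - 0.01 * epsilon)

result = gp_minimize(
    objective,
    dimensions=[Real(0.5, 4.0, name="noise_multiplier"),
                Real(0.1, 2.0, name="clip_norm")],
    n_calls=20,
    random_state=0,
)
print("best hyperparameters:", result.x)

Repeating the search with different penalty weights yields a set of points approximating the privacy-utility Pareto front; the paper itself treats this as a multi-objective problem rather than a fixed scalarization.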