Published on Mon Dec 30 2019

Differentially Private M-band Wavelet-Based Mechanisms in Machine Learning Environments

Kenneth Choi, Tony Lee

Abstract

In the post-industrial world, data science and analytics have gained paramount importance regarding digital data privacy. Improper methods of establishing privacy for accessible datasets can compromise large amounts of user data even if the adversary has only a small amount of preliminary knowledge about a user. Many researchers have been developing high-level privacy-preserving mechanisms that also retain the statistical integrity of the data for use in machine learning. Recent developments in differential privacy, such as the Laplace and Privelet mechanisms, drastically decrease the probability that an adversary can distinguish the elements in a dataset and thus extract user information. In this paper, we develop three privacy-preserving mechanisms based on the discrete M-band wavelet transform that embed noise into data. The first two methods (LS and LS+) add noise through a Laplace-Sigmoid distribution, which multiplies Laplace-distributed values by the sigmoid function; the third method utilizes pseudo-quantum steganography to embed noise into the data. We then show, through statistical analysis in various machine learning environments, that our mechanisms successfully retain both differential privacy and learnability.
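As a rough illustration of the LS idea described above, the following Python sketch draws Laplace noise, scales it by the sigmoid function, and embeds it into wavelet coefficients. Everything beyond the abstract's description is an assumption for illustration: the epsilon and sensitivity parameters, applying the sigmoid to the noise samples themselves, and using a standard two-band Daubechies wavelet from PyWavelets in place of the paper's M-band transform.

```python
# Illustrative sketch only: an LS-style noise mechanism as described in the
# abstract. Assumptions (not from the paper): epsilon/sensitivity parameters,
# sigmoid applied to the Laplace samples themselves, and a 2-band Daubechies
# wavelet standing in for the M-band transform.
import numpy as np
import pywt

def laplace_sigmoid_noise(shape, epsilon, sensitivity, rng):
    """Laplace(0, sensitivity/epsilon) samples, each scaled by the sigmoid."""
    lap = rng.laplace(0.0, sensitivity / epsilon, size=shape)
    return lap / (1.0 + np.exp(-lap))  # equals lap * sigmoid(lap)

def ls_mechanism(data, epsilon=1.0, sensitivity=1.0, wavelet="db2", seed=0):
    """Embed Laplace-Sigmoid noise into the detail coefficients of `data`."""
    rng = np.random.default_rng(seed)
    approx, detail = pywt.dwt(data, wavelet)      # forward wavelet transform
    detail += laplace_sigmoid_noise(detail.shape, epsilon, sensitivity, rng)
    return pywt.idwt(approx, detail, wavelet)     # reconstruct the noisy data

print(ls_mechanism(np.arange(16, dtype=float)))
```

Whether the sigmoid takes the noise itself or the data as its argument is left open by the abstract; the choice here is purely illustrative.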

Related Papers

Wed Dec 24 2014
Machine Learning
Differential Privacy and Machine Learning: a Survey and Review
The objective of machine learning is to extract useful information from data, while privacy is preserved by concealing information. We explore the interplay between machine learning and differential privacy. We also describe some theoretical results that address what can be learned differentially privately.

Tue Aug 20 2019
Machine Learning
AdaCliP: Adaptive Clipping for Private SGD
Privacy-preserving machine learning algorithms are crucial for learning models over user data while protecting sensitive information. Motivated by this, differentially private stochastic gradient descent (SGD) algorithms have been proposed. At each step, these algorithms modify the gradients and add noise proportional to the sensitivity of the gradient.
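For readers unfamiliar with that per-step recipe, here is a minimal numpy sketch of a generic differentially private gradient step: clip each per-example gradient, average, and add noise scaled to the resulting sensitivity. The Gaussian noise, clipping norm, and noise multiplier below are generic assumptions and do not reflect AdaCliP's adaptive scheme.

```python
# Generic DP-SGD-style step, sketched from the description above. The Gaussian
# noise, clipping norm, and noise multiplier are standard-but-assumed choices,
# not AdaCliP's adaptive clipping.
import numpy as np

def private_gradient_step(per_example_grads, clip=1.0, noise_multiplier=1.1, rng=None):
    """One generic DP-SGD-style step: per-example clipping plus calibrated noise."""
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)          # L2 sensitivity is clip / batch size
    noise = rng.normal(0.0, noise_multiplier * clip / len(per_example_grads),
                       size=mean_grad.shape)  # noise calibrated to that sensitivity
    return mean_grad + noise

# Example: 4 per-example gradients in R^3
grads = np.random.default_rng(0).normal(size=(4, 3))
print(private_gradient_step(grads))
```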
Mon Jan 28 2019
Machine Learning
Bayesian Differential Privacy for Machine Learning
Traditional differential privacy is independent of the data distribution. This is not well-matched with the modern machine learning context, where models are trained on specific data. We propose Bayesian differential privacy (BDP) to provide more practical privacy guarantees.

Fri Dec 07 2018
Machine Learning
Three Tools for Practical Differential Privacy
Differentially private learning on real-world data poses challenges for standard machine learning practice: privacy guarantees are difficult to interpret, hyperparameter tuning on private data reduces the privacy budget, and ad-hoc privacy attacks are often required to test model privacy.

Thu Dec 05 2019
Machine Learning
Element Level Differential Privacy: The Right Granularity of Privacy
Differential Privacy (DP) provides strong guarantees on the risk of compromising a user's data in statistical learning applications. We propose element level differential privacy, which extends DP to provide protection against leaking information about any particular "element" a user has.

Tue Mar 27 2018
Machine Learning
Privacy Preserving Machine Learning: Threats and Solutions
For privacy concerns to be addressed adequately in current machine learning systems, the knowledge gap between the machine learning and privacy communities must be bridged. This article aims to provide an introduction to the intersection of both fields with special emphasis on the techniques used to protect the data.