Published on Thu Aug 08 2019

Uncheatable Machine Learning Inference

Mustafa Canim, Ashish Kundu, Josh Payne

Classification-as-a-Service (CaaS) is widely deployed today in machine intelligence stacks. A CaaS provider may cheat a customer by fraudulently bypassing expensive training procedures in favor of weaker, less computationally intensive algorithms. We propose a variety of methods for the customer to evaluate the service claims made by the provider.

Abstract

Classification-as-a-Service (CaaS) is widely deployed today in machine intelligence stacks for a diverse set of applications, ranging from medical prognosis and computer vision to natural language processing and identity fraud detection. Training complex models on large datasets to perform inference for these problems can be very resource-intensive. A CaaS provider may cheat a customer by fraudulently bypassing expensive training procedures in favor of weaker, less computationally intensive algorithms that yield results of reduced quality. Given a classification service supplier, an intermediary CaaS provider claiming to use that supplier as a classification backend, and a customer, our work addresses the following questions: (i) how can the provider's claim to be using the supplier be verified by the customer? (ii) how might the supplier make performance guarantees that may be verified by the customer? and (iii) how might one design a decentralized system that incentivizes service proofing and accountability? To this end, we propose a variety of methods for the customer to evaluate the service claims made by the provider using probabilistic performance metrics, instance seeding, and steganography. We also propose a method of measuring the robustness of a model using a black-box adversarial procedure, which may then be used as a benchmark or point of comparison against a claim made by the provider. Finally, we propose the design of a smart contract-based decentralized system that incentivizes service accountability and serves as a trusted Quality of Service (QoS) auditor.
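As an illustration of one such check, the sketch below shows how a customer might run a simple instance-seeding audit: labeled probe instances known only to the customer are submitted to the black-box classification endpoint, and a one-sided binomial test decides whether the observed accuracy is consistent with the provider's claimed accuracy. This is a minimal sketch under stated assumptions, not the paper's implementation; the classify endpoint, the seed set, and the significance parameters are hypothetical placeholders.

import random
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p): how likely a result this poor is if the claim held.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def audit_provider(classify, seeds, claimed_accuracy=0.95, alpha=0.05):
    # classify: black-box callable returning a predicted label for an instance.
    # seeds: list of (instance, true_label) pairs held only by the customer.
    random.shuffle(seeds)  # randomize probe order; in practice seeds are interleaved with real queries
    correct = sum(1 for x, y in seeds if classify(x) == y)
    n = len(seeds)
    p_value = binom_cdf(correct, n, claimed_accuracy)  # one-sided test against the claimed accuracy
    return correct / n, p_value, p_value < alpha       # (observed accuracy, p-value, claim rejected?)

if __name__ == "__main__":
    # Hypothetical dishonest provider: answers at random half the time instead of classifying.
    def cheating_provider(x):
        return x % 2 if random.random() < 0.5 else random.randint(0, 1)

    seeds = [(i, i % 2) for i in range(200)]  # toy instances whose labels the customer knows
    acc, p, rejected = audit_provider(cheating_provider, seeds)
    print(f"observed accuracy={acc:.2f}, p-value={p:.4g}, claim rejected={rejected}")

For such an audit to be meaningful, the seeds would need to be indistinguishable from genuine queries so the provider cannot answer them with special care, which is presumably where the steganographic embedding mentioned above becomes relevant.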

Related Papers

Mon Dec 09 2019
Artificial Intelligence
Machine Unlearning
SISA training is a framework that expedites the unlearning process by limiting the influence of a data point in the training procedure. SISA training reduces the computational overhead associated with unlearning, even in the worst-case setting.
Fri Sep 09 2016
Machine Learning
Stealing Machine Learning Models via Prediction APIs
Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. In a model extraction attack, an adversary with no prior knowledge of an ML model's parameters or training data aims to duplicate the functionality of the model.
Mon May 18 2020
Artificial Intelligence
An Overview of Privacy in Machine Learning
Google, Microsoft, and Amazon have started to provide customers with access to software interfaces allowing them to embed machine learning tasks into their applications. If malicious users were able to recover data used to train these models, the resulting information leakage would create serious issues. Likewise, if the inner parameters of the model are considered proprietary information, then access to them should also be protected.
Wed Aug 01 2018
Artificial Intelligence
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
MLCapsule is a guarded offline deployment of machine learning as a service. It executes the model locally on the user's side and therefore the data never leaves the client. It offers the service provider the same level of control and security over its model as a server-side deployment.
Thu Nov 28 2019
Machine Learning
Computer Systems Have 99 Problems, Let's Not Make Machine Learning Another One
Machine learning techniques are finding many applications in computer systems. We believe machine learning systems are here to stay. To realize their potential, we need to take a fresh look at various key issues.
Mon Mar 13 2017
Machine Learning
Blocking Transferability of Adversarial Examples in Black-Box Learning Systems
Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars. ML classifiers are vulnerable to adversarial examples: maliciously modified inputs that cause the classifier to produce adversary-desired outputs.