One of the most concerning threats for modern AI systems is data poisoning,
where the attacker injects maliciously crafted training data to corrupt the
system's behavior at test time. Availability poisoning is a particularly
worrisome subset of poisoning attacks in which the attacker aims to mount a
Denial-of-Service (DoS) attack. However, state-of-the-art algorithms are
computationally expensive because they try to solve a complex bi-level
optimization problem (the "hammer"; a standard rendering is sketched below).
We observe that under particular conditions, namely, when the target model is
linear (the "nut"), such computationally costly procedures can be avoided. We
propose a counter-intuitive but efficient heuristic that contaminates the
training set so that the target system's performance is severely compromised.
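For reference, the bilevel problem in question is usually written as follows
in the poisoning literature (this is a standard rendering, not necessarily the
paper's exact formulation): the attacker picks poisoning points D_p to
maximize the victim's loss L on clean validation data, while the victim trains
on the contaminated set,

\[
\max_{D_p} \; L\big(D_{\mathrm{val}};\, \theta^\star\big)
\quad \text{s.t.} \quad
\theta^\star \in \operatorname*{arg\,min}_{\theta} \; L\big(D_{\mathrm{tr}} \cup D_p;\, \theta\big).
\]

The inner training problem must be (approximately) re-solved at every step of
the outer maximization, which is what makes such attacks computationally
expensive.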
We further suggest a re-parameterization trick to decrease the number of
variables to be optimized. Finally, we demonstrate that, under the considered
settings, our framework achieves comparable, or even better, performance in
terms of the attacker's objective while being significantly more
computationally efficient.
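As a purely illustrative sketch (not the paper's actual algorithm), the
following Python snippet shows one cheap heuristic of this flavor against a
linear model: flip the labels of the training points that a cleanly trained
model classifies most confidently and inject them as poisons. The 20%
poisoning budget and every identifier below are our own assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Clean baseline: the victim's linear model trained on untainted data.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean test accuracy:   ", accuracy_score(y_te, clean.predict(X_te)))

# Heuristic: take the points the clean model is most confident about
# (largest |decision_function|), flip their labels, and append them.
n_poison = len(X_tr) // 5                     # hypothetical 20% budget
conf = np.abs(clean.decision_function(X_tr))  # distance from the boundary
idx = np.argsort(conf)[-n_poison:]            # most confident points
X_p, y_p = X_tr[idx], 1 - y_tr[idx]           # same points, flipped labels

# Victim retrains on the contaminated (clean + poison) training set.
poisoned = LogisticRegression(max_iter=1000).fit(
    np.concatenate([X_tr, X_p]), np.concatenate([y_tr, y_p])
)
print("poisoned test accuracy:", accuracy_score(y_te, poisoned.predict(X_te)))

Note that such a heuristic needs only ordinary training runs and no bilevel
solve, which is the kind of saving the efficiency claim above refers to.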