Humans Strike Back: Artists Use Tools to ‘Poison’ AI Copycats

Bottle of poison (ADragan/Getty)

A new data “poisoning” tool is reportedly giving artists a way to fight back against generative AI by scrambling artificial intelligence training data when tech giants scrape their original work. As one artist explains, “It is going to make [AI companies] think twice, because they have the possibility of destroying their entire model by taking our work without our consent.”

The tool, called Nightshade, “poisons” AI training data in ways that could cause “serious damage” to image-generating AI models, harming future iterations of the technology by rendering their outputs useless, according to new research obtained by MIT Technology Review.

Nightshade allows artists to “add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways,” the report noted.

The tool, for example, can cause AI technology to inadvertently turn dogs into cats as it scrapes content in an attempt to learn.
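
The report does not detail Nightshade’s exact method, but the core idea of a pixel-level change too small for humans to notice can be sketched in a few lines of Python. The snippet below is a minimal illustration only, not Nightshade’s actual algorithm: the function name and the epsilon value are invented for the example, and it uses random noise where a real poisoning attack would use a perturbation carefully optimized to push the image toward a different concept.

```python
import numpy as np
from PIL import Image

def add_imperceptible_noise(path: str, epsilon: int = 4) -> Image.Image:
    """Shift each pixel by at most `epsilon` intensity levels (out of 255)."""
    # Load as RGB and widen to int16 so adding noise cannot overflow uint8.
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    # A real poisoning attack would *optimize* this perturbation so a model
    # reads the image as a different concept (e.g., a "dog" it perceives as
    # a cat); random noise is used here purely to show the bounded,
    # invisible-to-humans pixel change.
    noise = np.random.randint(-epsilon, epsilon + 1, size=pixels.shape)
    poisoned = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    return Image.fromarray(poisoned)

# Example usage (hypothetical filenames):
# add_imperceptible_noise("artwork.png").save("artwork_protected.png")
```

The point of the sketch is the constraint: each pixel moves by only a few intensity levels out of 255, far below what the human eye registers, yet enough, when optimized rather than random, to mislead a model that reads images numerically.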

University of Chicago professor Ben Zhao, who led the team that created Nightshade, says he hopes the tool will give artists leverage against AI technology that scrapes their copyrighted work and intellectual property.

Zhao’s team also developed a tool called Glaze, which lets artists “mask” their personal style to prevent it from being scraped by AI companies, MIT Technology Review reported.

Glaze works similarly to Nightshade in that it changes the pixels of images in subtle ways that are invisible to the human eye but damaging to AI models, which are tricked into interpreting the image as something different from what it actually is.

The team is also reportedly making Nightshade open source, meaning people will be able to experiment with it and potentially develop their own versions of the tool. Zhao added that the more people use the tool and create their own versions, the more powerful it will become.

Meanwhile, the more “poisoned” images that are scraped into the data sets of large AI models, which can consist of billions of images, the more damage they will do to the models trained on them.

Professor Zhao admitted that there is a risk people could abuse the poisoning tool for malicious purposes, but said that attackers would need thousands of poisoned samples to cause real damage.

Others believe that tools like Nightshade and Glaze will serve as a powerful deterrent, prompting AI companies to ask artists for permission before scraping their content, and perhaps even to pay royalties to creators.

“It is going to make [AI companies] think twice, because they have the possibility of destroying their entire model by taking our work without our consent,” Eva Toorenent, an illustrator and artist who has used Glaze, said.

You can follow Alana Mastrangelo on Facebook and X/Twitter at @ARmastrangelo, and on Instagram.
