Researchers find vulnerabilities in popular art protection tools


Artists are facing challenges in protecting their work from unauthorized use by AI models, according to researchers who have identified weaknesses in two popular art protection tools. These tools, Glaze and NightShade, were designed to safeguard artists' creations against generative AI's potential misuse.

The tools add subtle distortions to digital images to confuse AI models during training. Glaze takes a passive approach, while NightShade actively disrupts the learning process. Despite these measures, researchers have developed a method called LightShed that can bypass these protections by detecting and removing the distortions.
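The idea of perturbation-based protection, and why it can be fragile, can be illustrated with a toy sketch. The code below is purely illustrative and assumes nothing about how Glaze, NightShade, or LightShed actually work internally: it adds small random noise to an image (real tools craft perturbations adversarially, not randomly) and then removes most of it with simple smoothing (LightShed reportedly uses a learned detection-and-reconstruction approach, not a mean filter).

```python
import numpy as np

def add_cloak(image, eps=0.03, seed=0):
    """Add a small, bounded perturbation to an image with values in [0, 1].
    A toy stand-in for perturbation-based protections; the real tools
    compute perturbations adversarially against AI feature extractors."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def strip_cloak(cloaked, kernel=3):
    """Approximate perturbation removal with a mean filter.
    This crude smoothing only illustrates the general point that
    small additive distortions can often be detected and reduced."""
    pad = kernel // 2
    padded = np.pad(cloaked, pad, mode="edge")
    out = np.empty_like(cloaked)
    h, w = cloaked.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return out
```

On a smooth test image, the "cloak" raises the pixel-wise error relative to the original, and even naive smoothing brings most of it back down, which is the core weakness the researchers highlight at a much more sophisticated level.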

Developed by researchers at the University of Cambridge, Technical University Darmstadt, and the University of Texas at San Antonio, LightShed will be presented at the USENIX Security Symposium in August. The tool has demonstrated high accuracy in identifying and neutralizing protections on images.

“This shows that even when using tools like NightShade, artists are still at risk of their work being used for training AI models without their consent,” stated Hanna Foerster from Cambridge’s Department of Computer Science and Technology.

The researchers emphasize that LightShed is not intended as an attack but as a call to action for developing better defenses. Professor Ahmad-Reza Sadeghi of Technical University Darmstadt expressed the goal of collaborating with other scientists to support artists with more robust protection strategies.

As AI technology evolves rapidly, disputes over generative AI and copyright continue to surface. Notable cases include Getty Images' lawsuit against Stability AI over alleged copyright infringement, and Disney and Universal's suit against Midjourney over alleged plagiarism.

Hanna Foerster added, “What we hope to do with our work is to highlight the urgent need for a roadmap towards more resilient, artist-centred protection strategies.”

Foerster is affiliated with Darwin College, Cambridge. Her research paper on LightShed will be featured at the upcoming security symposium.
