MIT researchers create new tool to fight AI image manipulation

The tool, called PhotoGuard, makes small changes to images that are imperceptible to the human eye but nevertheless hamper AI-generated alterations.

Researchers at MIT are working to combat AI-based image manipulation. Cody O’Loughlin/The New York Times

As artificial intelligence technology continues to advance at a rapid pace, post-apocalyptic depictions of killer robots may seem more and more realistic. But AI does not have to reach the level of true human-like consciousness to wreak havoc. 

In a world where media literacy is low and trust in institutions is lacking, simply altering an image via AI could have devastating repercussions. A team of researchers at MIT is hoping to prevent this with a new tool that uses AI to combat the proliferation of fake, AI-altered images.

The tool, called PhotoGuard, is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney. To do this, it injects "imperceptible adversarial perturbations" into real images. These changes are not visible to the human eye, but they stick out like a sore thumb to the AI image generators.
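To make the idea of an "imperceptible adversarial perturbation" concrete, here is a minimal sketch in Python. It is illustrative only, not PhotoGuard's actual code: the perturbation is confined to a budget of a few intensity levels per pixel, far below what the eye can notice, yet it is a genuine numerical change that an image model will process.

import torch

image = torch.rand(1, 3, 512, 512)      # stand-in for a real photo, values in [0, 1]
epsilon = 8.0 / 255.0                   # budget: at most ~3% of the intensity range per pixel

# A random perturbation clipped to the +/- epsilon budget.
delta = (torch.rand_like(image) * 2 - 1) * epsilon
protected = (image + delta).clamp(0.0, 1.0)

print("largest per-pixel change:", (protected - image).abs().max().item())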

Such image generators could cause chaos in a number of ways. A fake image of a catastrophic event could heavily sway public opinion, or alterations to personal images could be used for blackmail.

“The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation,” Hadi Salman, lead author of the new paper about PhotoGuard, told MIT News.

PhotoGuard, developed at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), prevents alterations using two methods. 

The first is known as an "encoder" attack, according to MIT News. AI models "see" images as masses of complicated data points that describe the location and color of every pixel. This method subtly adjusts a real image's mathematical representation as seen by an AI model. As a result, the AI interprets the image as a "random entity." Alteration becomes nearly impossible, but the changes made by PhotoGuard are not visible to the human eye and the image retains its quality.
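In code, an encoder attack of this kind is typically implemented as projected gradient descent on the pixels. The sketch below is a hedged approximation, not the paper's exact recipe: the real method targets the image encoder of a latent diffusion model, whereas this version substitutes a tiny stand-in encoder, and the loss, step size, and step count are illustrative assumptions. The perturbation is nudged so that the encoded representation drifts toward a random target, and every update is projected back into the imperceptibility budget.

import torch
import torch.nn as nn

# Stand-in for a generative model's image encoder; the real attack targets the
# encoder of a latent diffusion model rather than this toy network.
encoder = nn.Sequential(nn.Conv2d(3, 4, kernel_size=8, stride=8), nn.Flatten())
encoder.eval()

def encoder_attack(image, epsilon=8 / 255, step_size=1 / 255, steps=40):
    # Fixed random latent: the "random entity" the image should be mistaken for.
    with torch.no_grad():
        target_latent = torch.randn_like(encoder(image))

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.mse_loss(encoder(image + delta), target_latent)
        loss.backward()
        with torch.no_grad():
            # Step toward the random target, then project back into the budget
            # so the change stays invisible and the result stays a valid image.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.data = (image + delta).clamp(0, 1) - image
        delta.grad = None
    return (image + delta).clamp(0, 1).detach()

protected = encoder_attack(torch.rand(1, 3, 256, 256))

Because only the encoder is involved, each optimization step is relatively cheap; the attack never has to run a full image-generation pass.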

The second method, called a "diffusion" attack, is more complicated and requires more computational horsepower, according to MIT News. It also relies on making imperceptible changes to a real image, but here the changes are crafted so that the AI generator perceives the input as a different image entirely. As a result, its attempts at alteration are ineffective.
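A hedged sketch of the same loop for the diffusion attack follows. The structure is nearly identical to the encoder attack, but the gradient now flows through the whole editing pipeline rather than just the encoder, which is what makes it so much more expensive. The edit_pipeline stand-in, the gray target image, and the hyperparameters below are assumptions for illustration, not PhotoGuard's published configuration.

import torch
import torch.nn as nn

# Stand-in for a full, differentiable image-editing pipeline; in the real attack
# this is the end-to-end diffusion process, which is far costlier to backpropagate through.
edit_pipeline = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))
edit_pipeline.eval()

def diffusion_attack(image, target, epsilon=8 / 255, step_size=1 / 255, steps=40):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Optimize the perturbation so the *edited output* is pulled toward the
        # target image, leaving the generator's attempted edits ineffective.
        edited = edit_pipeline(image + delta)
        loss = nn.functional.mse_loss(edited, target)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.data = (image + delta).clamp(0, 1) - image
        delta.grad = None
    return (image + delta).clamp(0, 1).detach()

image = torch.rand(1, 3, 256, 256)
gray_target = torch.full_like(image, 0.5)   # push any attempted edit toward a flat gray image
protected = diffusion_attack(image, gray_target)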

The team at MIT said that their creation is no cure-all and called for collaboration among all stakeholders involved, including the companies pioneering the use of AI for image alteration.

“A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” Salman told MIT News. “And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools.”
