
Explicit AI deepfakes of Taylor Swift cause outrage

A wave of outrage has swept through fans and lawmakers alike after a series of explicit, AI-generated deepfake images of Taylor Swift circulated on social media, as reported by VentureBeat.

The images depict Time's 2023 Person of the Year engaged in explicit sexual acts with fans of the Kansas City Chiefs, the NFL team of her boyfriend, Travis Kelce.

Swift’s loyal fanbase has rushed to her defense on social platforms under the hashtag #ProtectTaylorSwift, working to block the spread of the content as new accounts continue to repost the images. The episode has added to mounting pressure on US legislators to rein in the rapidly advancing generative AI industry.

It remains unclear which AI tools were used to create the deepfakes. While platforms such as Midjourney and OpenAI’s DALL-E 3 prohibit the generation of sexually explicit or suggestive content, 404 Media reports that the images were produced using Microsoft’s AI tools, which are powered by DALL-E 3.

A Twitter account linked to the creation of some of the images, @Zvbear, has reportedly claimed responsibility, according to Newsweek, and has since made the account private.

Efforts to Regulate Deepfake Content Creation

In light of Taylor Swift’s reported displeasure with the spread of these specific deepfake images on social media, US lawmakers are facing mounting calls to regulate the underlying technology.

Tom Kean Jr., a Republican congressman representing New Jersey, released a statement urging Congress to take up and pass two bills he has introduced to regulate AI technology.

In his statement, Kean stressed the urgency of establishing regulations to combat this trend, arguing that safeguards are needed to protect individuals like Taylor Swift and young people nationwide.

One of Kean’s bills, the AI Labeling Act, would require companies behind AI multimedia tools to include a conspicuous notice on their outputs identifying them as AI-generated. While a significant step, it is uncertain whether such labeling would prevent the creation of harmful images in the first place.

Notably, Meta already applies a similar label to images generated with its Imagine AI art tool, while OpenAI has pledged to add credentials to AI-generated images.

Featured Image: Photo by Rosa Rafael on Unsplash
