Political Ads Using AI Must Be Labeled, FCC Chair Says

If artificial intelligence is tapped to create a political advertisement, its use must be disclosed, according to a new proposal issued by the U.S. Federal Communications Commission. The FCC notice, published on Wednesday, comes nearly three months after an AI-generated robocall targeted voters in New Hampshire.

Under the FCC proposal, political ads would require an on-air disclosure and a written disclosure kept on file by broadcasters whenever AI-generated content is included.

“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” FCC Chair Jessica Rosenworcel said in a statement, adding that consumers have a right to know when AI is used in the political content they see or hear.

The disclosure rules would apply to both candidate and issue advertisements, as well as to entities that offer “origination programming,” or programming produced or acquired by a licensee for transmission to subscribers, including cable, satellite TV, and radio providers.

Beyond the disclosure requirement, the proposed policy would not outright ban AI-generated content, though the agency has taken stronger action in the past.

In February, the FCC banned AI-generated robocalls after an audio deepfake of U.S. President Joe Biden attempted to trick New Hampshire residents into not voting in the state’s primary election. Already the subject of previous AI-generated deepfakes, Biden called for a ban on AI voice impersonation during the State of the Union address in March.

But while Biden called for banning AI voice impersonation, Matt Diemer, a congressional candidate for Ohio’s 7th district, partnered with AI developer Civox to use the technology to engage with voters.

“[A] system like Civox allows me to put my voice out there to people,” Diemer previously told Decrypt. “That would be over 730,000 citizens throughout the state.”

“It’s no different than sending out blogs, emails, text messages, TikToks, or tweets,” he said. “This is another way for people to interact with me and have more of a connection.”

Diemer, who was a periodic host on Decrypt’s once-daily GM podcast, previously differentiated his candidacy through his support of crypto—making AI only the latest emerging technology added to his toolbox.

Generative AI model developers, including Microsoft, OpenAI, Meta, Anthropic, and Google, have already restricted or banned the use of their large language model platforms for political ads.

“In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” a Google spokesperson previously told Decrypt.

Looking to the U.S. elections this fall and beyond, the FCC emphasized the need to stay vigilant against deceptive AI-generated deepfakes.

“The use of AI is expected to play a substantial role in the creation of political ads in 2024 and beyond, but the use of AI-generated content in political ads also creates a potential for providing deceptive information to voters, in particular, the potential use of ‘deepfakes’—altered images, videos, or audio recordings that depict people doing or saying things that [they] did not actually do or say, or events that did not actually occur,” the agency said.

The FCC did not immediately respond to a request for comment from Decrypt.

Edited by Ryan Ozawa.
