Google announced on Wednesday that it will soon mandate disclosures for political advertisements on its platforms. Starting in November, advertisers will be required to disclose when images and audio in their ads have been altered or created using artificial intelligence (AI). This change comes as concerns grow over the potential misuse of generative AI to deceive voters, especially with the upcoming US presidential election.
Google’s spokesperson stated, “For years we’ve provided additional levels of transparency for election ads. Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated.”
Earlier this year, a campaign video attacking former President Donald Trump, released by Ron DeSantis's presidential campaign, was found to include images that had been altered using AI. Google's ad policies already prohibit manipulating digital media to mislead or deceive people about political matters and social issues.
In addition to the new disclosure requirement, Google's current ad policies require political ads to identify their sponsors, and the company maintains an online ads library where the public can look up election ads that have run on its platforms. The upcoming update will further require election-related ads to prominently disclose the presence of "synthetic content" that portrays real or realistic-looking individuals or events. Google said it continues to invest in technology to detect and remove such content.
According to Google, disclosures of digitally altered content in election ads must be clear and conspicuous, and placed where users are likely to notice them. Example labels Google provided include "This image does not depict real events" and "This video content was synthetically generated."