As the election season approaches, Google and YouTube are closely monitoring the use of AI-altered political advertisements, a growing concern as political campaigns intensify their use of generative AI.
In an update to Google's political content policy, any advertisement featuring "synthetic" or artificially altered people, voices, or events must now include a clear disclosure within the ad itself.
While Google already prohibits deepfake content in advertising, the expanded disclosure rules now cover any AI alteration that goes beyond minor edits, as reported by The Washington Post. The policy exempts synthetic alterations that are "inconsequential to the ad's claims," so AI can still be used for routine video and photo editing such as image resizing, cropping, color correction, defect correction, or background edits.
The intersection of political ads and Big Tech is shaping up to be a significant feature of the 2024 election. Elon Musk recently announced that X (formerly Twitter) will once again permit political ads from candidates and political parties, reversing a roughly four-year ban on all political advertising. The decision comes as platform users report seeing more unlabeled advertisements in their feeds.
A September report from Media Matters for America revealed that Meta platforms are not effectively enforcing the company's political ad policy, with unlabeled right-wing advertisements appearing on Facebook and Instagram.
Google's new policy takes effect in November and will apply to election ads across Google's platforms, including YouTube, as well as third-party sites in the company's ad network.