Deepfakes have something in common with virus mutations: the most "impressive" or "effective" ones are also the most disconcerting. As generative AI went mainstream over the past year with the proliferation of text-generating chatbots, social media platforms were inundated with algorithmically generated images and audio, further blurring the line between what is real and what isn't.
On the positive side, some deepfakes are clearly not intended to deceive anyone. Some even exhibit a sense of whimsy. Regrettably, the ones that go viral often exist in a morally gray area.
Here are some deepfakes from the first half of 2023 that garnered attention due to their unsettling realism. Enjoy them as entertainment, but please refrain from using them to propagate hoaxes. The world is already filled with enough falsehoods.
While the world eagerly awaited the arrest of former President Donald Trump, journalist and Bellingcat founder Eliot Higgins decided to unleash his creativity. Utilizing the AI image generator Midjourney, Higgins crafted images depicting Trump engaged in various activities, including resisting arrest, evading the NYPD, and partaking in prison life while clad in an orange jumpsuit.
In actuality, Trump turned himself in to law enforcement on April 4 and was spared a mugshot by the Manhattan District Attorney's office. Those hoping for a real-life rendition of the AI-generated fantasy were instead met with the relatively uneventful reality of Trump's arraignment.
When an image of Pope Francis sporting a massive white puffer jacket went viral in March, the internet reveled in the Pope's stylish wardrobe upgrade. Regrettably, it was a fabrication. The image of the Pope in a Balenciaga-style puffer jacket was created by a 31-year-old construction worker using Midjourney.
However, the image was deceptively realistic, fooling numerous individuals, including Chrissy Teigen, who tweeted, "I thought the Pope’s puffer jacket was real and didn't give it a second thought. No way am I surviving the future of technology." We hear you, Chrissy.
Curiously, Balenciaga made a second appearance that same month, this time with Harry Potter characters receiving the Balenciaga treatment courtesy of Midjourney. In the video "Harry Potter by Balenciaga," created by Demonflyingfox, all of the main characters are transformed into fashion models with striking features and intense expressions, like a fashion campaign taking itself a touch too seriously. The video was never intended to deceive anyone, but it captures the tone, aesthetic, and posing of Balenciaga models so convincingly that it doubles as a showcase of what tools like Midjourney can do.
In April 2023, The Weeknd and Drake released a blazing hit called "Heart on My Sleeve." Except they didn't. It was created by an anonymous artist named Ghostwriter using AI. The song gained notoriety not only for its catchiness but also due to the intricate copyright issues raised by generative AI in the music industry.
While it remains unclear which technology was used to produce the song, creating audio deepfakes is surprisingly straightforward. Numerous tools are available that use text-to-speech or existing audio clips to essentially clone someone's voice and make it say whatever you desire.
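To give a sense of just how low that barrier is, here is a minimal sketch using the open-source Coqui TTS library, whose XTTS model can clone a voice from a few seconds of reference audio. The reference clip, output path, and text are illustrative placeholders, and there is no suggestion that this particular tool is what Ghostwriter used.

```python
# Minimal voice-cloning sketch with the open-source Coqui TTS library.
# The file names and text below are placeholder assumptions; this is not
# the tool or workflow behind "Heart on My Sleeve."
from TTS.api import TTS

# XTTS v2 clones a speaker's timbre from a short reference recording.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Any sentence you want spoken in the cloned voice.",
    speaker_wav="reference_clip.wav",  # a short sample of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```

A handful of lines like these, plus a publicly available snippet of someone's voice, is roughly all these tools require, which is why audio deepfakes have spread so quickly.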
The song was eventually removed from YouTube, Spotify, Apple Music, and other streaming platforms due to copyright infringement. In a statement to Billboard, Universal Music Group, the record label for The Weeknd and Drake, declared:
The training of generative AI using our artists' music (a clear breach of our agreements and of copyright law), as well as the availability of infringing content created with generative AI on digital service providers (DSPs), forces all stakeholders in the music ecosystem to choose a side: supporting artists, fans, and human creative expression, or indulging in deepfakes, fraud, and denying artists their due compensation.
In contrast to the relatively harmless Balenciaga Pope, the false image of an explosion near the Pentagon illustrates the potential danger posed by generative AI in the wrong hands. In May, an image depicting a fire and billowing smoke from an apparent explosion near the Pentagon circulated on Twitter.
Law enforcement swiftly debunked the image as a deepfake, but it had genuine repercussions, causing a brief dip in the stock market. The presence of blurred fencing in front of the building and irregular column sizes indicated that the image was either AI-generated or digitally manipulated. As generative AI becomes more sophisticated, spotting deepfakes will become increasingly challenging.