Making deepfakes cheaper and easier thanks to AI
Comedian-turned-podcaster Joe Rogan’s endorsement of a “sex-boosting” coffee brand for men isn’t entirely out of character.
But when a video recently circulated on TikTok showing Mr. Rogan and his guest, Andrew Huberman, peddling coffee, some eagle-eyed viewers — including Dr. Huberman — were shocked.
“Yes, that’s fake,” Dr. Huberman wrote on Twitter after seeing the ad, in which he appeared to extol coffee’s testosterone-boosting potential, even though he had never done so.
The ad is one of a growing number of fake videos on social media made with artificial intelligence technology. Experts said Mr. Rogan’s voice appeared to have been synthesized using AI tools that mimic celebrities’ voices, while Dr. Huberman’s comments were excerpted from an unrelated interview.
Creating realistic fake videos, often called deepfakes, once required elaborate software to put one person’s face onto another’s. But now, many of the tools to create them are available to everyday consumers, even on smartphone apps, and often for little or no money.
The newly edited videos, so far mostly the work of meme makers and marketers, have gone viral on social media sites like TikTok and Twitter. The content they produce, sometimes called “cheap fakes” by researchers, works by cloning celebrity voices, altering mouth movements to match alternative audio, and writing persuasive dialogue.
These videos, and the accessible technology behind them, have some AI researchers worried about their dangers, and they have raised fresh concerns about whether social media companies are prepared to moderate the growing digital fakery.
Disinformation watchdogs are also bracing for a wave of digital disinformation that could trick viewers or make it harder to know what’s true online.
“The difference is that now everybody can do it,” said Britt Paris, an assistant professor of library and information science at Rutgers University who helped coin the term “cheap fake.” “It’s not just for people with complex computing skills and a fairly sophisticated knowledge of computing. Instead, it’s a free app.”
Plenty of manipulated content has circulated on TikTok and elsewhere over the years, often using more modest tricks like careful editing or swapping one audio clip for another. In one video on TikTok, Vice President Kamala Harris appears to say that everyone hospitalized with Covid-19 is vaccinated. In fact, she said the patients were unvaccinated.
Graphika, a research firm that studies disinformation, discovered a deepfake of a fictional news anchor distributed by a pro-China bot account late last year, the first known example of the technology being used in a state-aligned influence campaign.
But some new tools offer similar techniques to everyday Internet users, giving comedians and partisans the chance to craft their own convincing spoofs.
Last month, a fake video circulated showing President Biden announcing a national draft for a war between Russia and Ukraine. The video was produced by the team behind Human Events Daily, a podcast and livestream run by Jack Posobiec, a right-wing influencer known for spreading conspiracy theories.
In a segment explaining the video, Mr. Posobiec said his team had created it using artificial intelligence technology. The conservative account The Patriot Oasis tweeted the video with a breaking-news hashtag but did not indicate that it was fake. The tweet was viewed more than eight million times.
Many of the video clips featuring synthesized speech appear to use technology from ElevenLabs, an American startup co-founded by former Google engineers. In November, the company unveiled a voice-cloning tool that can be trained to duplicate voices in seconds.
ElevenLabs drew attention last month after 4chan, a message board known for its racist and conspiratorial content, used the company’s tool to share hateful messages. In one example, a 4chan user posted a recording of an anti-Semitic text read in a computer-generated voice impersonating the actress Emma Watson. Motherboard reported earlier on 4chan’s use of the audio technology.
ElevenLabs said on Twitter that it would introduce new safeguards, such as restricting voice cloning to paid accounts and offering a new AI-detection tool. But 4chan users said they would use open-source code to create their own version of the voice-cloning technology, posting demos that sound similar to audio produced by ElevenLabs.
“We want to have custom AI with creative capabilities,” wrote an anonymous 4chan user in a thread about the project.
An ElevenLabs spokeswoman said in an email that the company is seeking to collaborate with other AI developers to create a common detection system that could be adopted across the industry.
Videos featuring cloned voices, created with ElevenLabs’ tool or similar technology, have gone viral in recent weeks. One, posted on Twitter by Elon Musk, the site’s owner, shows a fake, profanity-laced conversation among Mr. Rogan, Mr. Musk and Jordan Peterson, a Canadian men’s rights activist. In another, posted on YouTube, Mr. Rogan appears to interview a fake version of Prime Minister Justin Trudeau of Canada about his political scandals.
“Creating such counterfeits should be a crime punishable by 10 years in prison,” Mr. Peterson said in a tweet about the fake video featuring his voice. “This technology is incredibly dangerous.”
In a statement, a YouTube spokesperson said the video of Mr. Rogan and Mr. Trudeau did not violate the platform’s policies because it “provides sufficient context.” (The creator had described it as a “fake video.”) The company said its misinformation policies prohibit content altered in a misleading manner.
Experts who study deepfakes said the fake ad featuring Mr. Rogan and Dr. Huberman was most likely created with a voice-cloning program, though the exact tool used is unclear. Mr. Rogan’s audio was spliced into a real interview in which Dr. Huberman discussed testosterone.
The results were not perfect. The clip of Mr. Rogan was taken from an unrelated interview, published in December, with the professional pool player Fedor Gorst. Mr. Rogan’s mouth movements do not match the audio, and his voice sounds unnatural at times. It is hard to tell whether the video fooled TikTok users: it attracted far more attention after it was flagged for its impressive fakery.
TikTok’s policies prohibit digital forgeries that “mislead users by distorting the truth of events and cause significant harm to the subject of the video, other people, or society.” Some of the videos were removed after The New York Times flagged them to the company. Twitter also removed some of the videos.
A TikTok spokesman said the company uses a “combination of technology and human moderation to detect and remove” manipulated videos, but declined to elaborate on its methods.
Mr. Rogan and the companies featured in the false advertisements did not respond to requests for comment.
Many social media companies, including Meta and Twitch, have banned deepfakes and manipulated videos that deceive users. Meta, which owns Facebook and Instagram, ran a contest in 2021 to develop programs capable of spotting deepfakes, which produced a tool that could detect them 83 percent of the time.
Federal regulators have been slow to respond. One federal law from 2019 requires a report on the weaponization of deepfakes by foreign actors, requires government agencies to notify Congress if deepfakes target U.S. elections, and created a prize to encourage research on tools that can detect deepfakes.
“We can’t wait two years for laws to be passed,” said Ravit Dotan, a postdoctoral researcher at the University of Pittsburgh’s Collaborative AI Responsibility Lab. “By then, the damage may be too great. There is an election coming up in the United States, and it is going to be a problem.”