
Last year, Andrei Doronichev was shocked when he saw a video on social media that appeared to show the Ukrainian president surrendering to Russia.
The video was quickly debunked as a synthetically generated deepfake, but for Mr. Doronichev it was a worrisome omen. This year, his fears have come closer to reality, as companies race to enhance and release artificial intelligence technology despite the havoc it can wreak.
Generative AI is now available to anyone, and it is increasingly capable of fooling people with text, audio, images, and video that appear to have been conceived and captured by humans. The risk that society will be fooled at scale has raised concerns about disinformation, job loss, discrimination, privacy, and widespread dystopia.
For entrepreneurs like Mr. Doronichev, it has also become a business opportunity. There are now a dozen companies offering tools to identify whether something was made with artificial intelligence, under names such as Sensity AI (deepfake detection), Fictitious.AI (plagiarism detection), and Originality.AI (also plagiarism detection).
Doronichev, a native Russian, founded a San Francisco-based company called Optic to help identify synthetic or spoofed material—an “airport X-ray machine for digital content,” as he puts it.
In March, the company unveiled a website where users can check images to see whether they were made from real photographs or by artificial intelligence. It is developing other services to verify video and audio.
“The authenticity of content will become a major issue for society as a whole,” said Mr. Doronichev, an investor in a face-swapping app called Reface. “We are entering the age of cheap fakes.” Because it is cheap to produce fake content, he said, it can be produced on a large scale.
According to market research firm Grand View Research, the overall generative AI market is expected to exceed $109 billion by 2030, growing at an average annual rate of 35.6 percent until then. Companies specializing in detection technologies are a growing segment of the industry.
A few months after a Princeton student created it, GPTZero claims that more than a million people have used its program to detect computer-generated text. Reality Defender was one of 414 companies chosen from 17,000 applications to be backed by the startup accelerator Y Combinator this winter.
Copyleaks raised $7.75 million last year, in part to expand its anti-plagiarism service for schools and universities to detect artificial intelligence in students’ work. Sentinel, whose founders specialized in cybersecurity and information warfare for the Royal Navy and NATO, closed a $1.5 million seed round in 2020, backed in part by one of Skype’s founding engineers, to help protect democracies from deepfakes and other malicious synthetic media.
Major tech companies are also involved: Intel’s FakeCatcher claims to be able to identify deepfake videos with 96 percent accuracy, in part by analyzing pixels for subtle signs of blood flow in people’s faces.
Within the federal government, the Defense Advanced Research Projects Agency plans to spend nearly $30 million this year to run Semantic Forensics, a program that develops algorithms that can automatically detect deepfakes and determine whether they are malicious.
Even OpenAI, which fueled the AI boom when it released its ChatGPT tool late last year, is working on detection services. The San Francisco-based company debuted a free tool in January to help distinguish text written by humans from text written by artificial intelligence.
OpenAI stresses that while the tool is an improvement over past iterations, it is still “not entirely reliable.” The tool correctly identified 26 percent of AI-generated text but incorrectly labeled 9 percent of human-written text as computer-generated.
The OpenAI tool suffers from a flaw common to detection programs: it struggles with short texts and non-English writing. In educational settings, plagiarism detection tools like Turnitin have been accused of inaccurately classifying essays written by students as generated by chatbots.
Detection tools inherently lag behind the generative technology they are trying to detect. By the time a defense can recognize the work of a new chatbot or image generator, such as Google Bard or Midjourney, developers are already coming up with a new iteration that can evade that defense. The situation has been described as an arms race, or a virus-antivirus relationship in which one breeds the other, over and over again.
“When Midjourney released Midjourney 5, my starting gun went off and I started trying to catch up — and while I was doing that, they were working on Midjourney 6,” said Hany Farid, a professor of computer science at the University of California, Berkeley, who specializes in digital forensics and is also involved in the artificial intelligence detection industry. “It’s an inherently adversarial game, and while I’m working on the detector, someone is building a better mousetrap, a better synthesizer.”
Joshua Tucker, a professor of political science at New York University and co-director of its Center for Social Media and Politics, said that even with that game of catch-up, many companies are already seeing demand for AI detection from schools and educators. He questioned whether a similar market would emerge ahead of the 2024 election.
“Are we going to see parallel wings of these companies developing to help protect political candidates so they can know when they’re being targeted for this type of thing,” he said.
Experts say synthetically generated video is still fairly clunky and easy to recognize, but audio cloning and image generation are both highly advanced. Distinguishing real from fake will require digital forensics strategies such as reverse image searches and IP address tracking.
Available detection programs are being tested with “examples that are very different from those in the wild, where images have been circulating, modified, cropped, downsized, transcoded, annotated, and God knows what else has happened to them,” Mr. Farid said.
“That laundering of content makes this a daunting task,” he added.
The Content Authenticity Initiative, a consortium of 1,000 companies and organizations, is one group trying to make generative technology obvious from the start. (It is led by Adobe, and its members include The New York Times as well as artificial intelligence players like Stability AI.) Rather than piecing together the origins of an image or video later in its life cycle, the group seeks to establish standards that attach traceable credentials to digital work at the moment of creation.
Adobe said last week that its generative technology, Firefly, will be integrated into Google Bard, where it will attach a “nutritional label” to the content it produces, including the date an image was made and the digital tools used to create it.
Jeff Sakasegawa, trust and safety architect at Persona, a company that helps verify consumer identities, said the challenges posed by artificial intelligence are just beginning.
“This wave is building momentum,” he said. “It’s heading toward the shore. I don’t think it has broken yet.”