October 2, 2023

In Toronto, a candidate in this week’s mayoral election who has vowed to clean up homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camping out on downtown streets and fabricated images of tents pitched in parks.

In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry store.

In Chicago, the runner-up in April’s mayoral race complained that a Twitter account posing as a news outlet had used artificial intelligence to clone his voice, suggesting he condoned police brutality.

What began a few months ago as a slow drip of fundraising emails and promotional images composed by artificial intelligence for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.

Political consultants, election researchers and lawmakers increasingly say building new guardrails, such as legislation to limit synthetic advertising, should be a top priority. Existing defenses, such as social media rules and services that claim to detect AI content, haven’t done much to slow the trend.

As the 2024 U.S. presidential race begins to heat up, some campaigns are already testing the technology. After President Biden announced his re-election bid, the Republican National Committee released a video of artificially generated images depicting an apocalyptic scene, while Gov. Ron DeSantis of Florida posted fake images of former President Donald J. Trump with Dr. Anthony Fauci. The Democratic Party experimented with fundraising messages drafted by AI in the spring and found they were often more effective at encouraging engagement and donations than copy written entirely by humans.

Some politicians see AI as a way to help reduce campaign costs, by using it to create instant responses to debate issues or attack ads, or to analyze data that might require expensive experts.

At the same time, the technology has the potential to spread disinformation to a broad audience. An unflattering bogus video, an email full of computer-generated false narratives or fabricated images of urban decay could reinforce bias and widen partisan divides by showing voters what they want to see, experts say.

The technology is already far more powerful than manual manipulation: it is not perfect, but it improves quickly and is easy to learn. Sam Altman, the chief executive of OpenAI, whose popular ChatGPT chatbot fueled the AI boom last year, told a Senate subcommittee in May that he was nervous about election season.

He said the technology’s ability to “manipulate, persuade and provide one-on-one interactive disinformation” was “an important area of concern.”

Rep. Yvette D. Clarke, D-N.Y., said in a statement last month that the 2024 election cycle “will be the first election in which AI-generated content prevails.” She and other congressional Democrats, including Sen. Amy Klobuchar of Minnesota, have introduced legislation that would require disclaimers to accompany political ads that use artificially generated material. A similar bill in Washington state was recently signed into law.

The American Institute of Political Consultants recently condemned the use of deepfakes in political campaigns as a violation of its code of ethics.

“People are tempted to push the envelope and see how far they can go,” said Larry Huynh, the group’s incoming president. “As with any tool, there can be bad uses and bad behavior, using them to deceive voters, to mislead voters, to make people believe something that doesn’t exist.”

The technology’s recent incursion into politics has come as a surprise in Toronto, a city that supports a thriving ecosystem of AI research and start-ups. The mayoral election will be held on Monday.

Former news columnist Anthony Furey, a conservative candidate in the race, recently released a platform document running to dozens of pages and filled with synthetically generated content to help him establish a tough-on-crime stance.

Upon closer inspection, it becomes clear that many of the images are not real: one lab scene shows scientists who look like alien blobs. Another rendering shows a woman wearing a pin with illegible writing on her cardigan; similar markings appear in images of warning tape at a construction site. Mr. Furey’s campaign also used a synthetic portrait of a seated woman with two arms folded and a third arm touching her chin.

Other candidates mined the images for laughs in a debate this month. “We’re actually using real photos,” said Josh Matlow, displaying a photo of his family and adding that “no one in our pictures has three arms.”

Still, the sloppy renderings were used to amplify Mr. Furey’s arguments, and he gained enough momentum to become one of the most recognizable names in an election with more than 100 candidates. During the same debate, he acknowledged using the technology in his campaign, adding that “we’re going to have a couple of laughs here as we learn more about AI.”

Political experts worry that artificial intelligence, if misused, could have a corrosive effect on the democratic process. Misinformation is an ongoing risk; one of Furey’s competitors said in a debate that while her staff uses ChatGPT, they always fact-check its output.

“If someone can create noise, build uncertainty or develop false narratives, that could be an effective way to sway voters and win the race,” Darrell M. West, a senior fellow at the Brookings Institution, wrote in a report last month. “Since the 2024 presidential election may come down to tens of thousands of voters in a few states, anything that can nudge people in one direction or another could end up being decisive.”

Ben Colman, the chief executive of Reality Defender, a company that offers AI detection services, said increasingly sophisticated AI content was appearing more frequently on social networks, which have been largely unwilling or unable to police it. Until the problem is addressed, he said, “irreversible damage” will be done.

“It’s too little, too late to explain to millions of users after the fact that what they saw and shared was fake,” Mr. Colman said.

For several days this month, a Twitch livestream has run a nonstop, not-safe-for-work debate between synthetic versions of Mr. Biden and Mr. Trump. Both are clearly labeled as simulated “AI entities,” but disinformation experts say that if an organized political campaign created such content and spread it widely without any disclosure, it could easily degrade the value of authentic material.

Politicians can shrug off responsibility by claiming that real footage of compromising behavior isn’t real, a phenomenon known as the liar’s dividend. Ordinary citizens can create fakes themselves, while others can plunge themselves even deeper into a polarizing information bubble, trusting only the sources they choose to believe.

“If people can’t trust their eyes and ears, they may just say, ‘Who knows?’” Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, wrote in an email. “This could foster a move from healthy skepticism that encourages good habits, such as lateral reading and searching for reliable sources, to an unhealthy skepticism that it is impossible to know what is true.”





