September 30, 2023

When President Biden announced severe restrictions in October on selling the most advanced computer chips to China, he did so in part to give American industry a chance to restore its competitiveness.

But at the Pentagon and the NSC, there is a second agenda: arms control.

The theory is that if the Chinese military cannot get the chips, it may slow its development of AI-powered weapons. That would give the White House, and the world, time to figure out some rules for the use of artificial intelligence in sensors, missiles, and cyberweapons, and ultimately to guard against some of Hollywood’s worst nightmares: autonomous killer robots and computers that lock out their human creators.

Now the fog of fear surrounding the popular ChatGPT chatbot and other generative AI software has made restricting chips to Beijing look like only a temporary fix. When Mr. Biden dropped by a White House meeting on Thursday of technology executives who are working to limit the technology’s risks, his first comment was that “there is great potential and great danger in what you are doing.”

His national security aides said the comment mirrored recent classified briefings on the potential of the new technology to upend warfare, cyber conflict and, in the most extreme cases, decisions about the use of nuclear weapons.

But even as Mr. Biden sounded the warning, Pentagon officials speaking at technology forums said they thought the idea of pausing development of the next generation of ChatGPT and similar software for six months was a bad idea: the Chinese will not wait, and neither will the Russians.

“If we stop, guess who won’t: potential adversaries overseas,” the Pentagon’s chief information officer, John Sherman, said on Wednesday. “We have to keep moving.”

His bluntness underscores the tension across the defense community today. No one really knows what these new technologies are capable of when it comes to developing and controlling weapons, or what kind of arms control regime, if any, might work.

The fears are vague but deeply worrying. Can ChatGPT empower bad actors who previously lacked easy access to disruptive technology? Will it hasten a confrontation between the superpowers, leaving little time for diplomacy and negotiation?

“The industry is not stupid, and you’ve seen efforts to self-regulate,” said Eric Schmidt, Google’s former chairman, who served as the inaugural chairman of the Defense Innovation Board from 2016 to 2020.

“As a result, the industry is now having a series of conversations, all informal, about what AI safety rules would look like,” said Mr. Schmidt, who has written, with former Secretary of State Henry Kissinger, a series of articles and books on the potential of artificial intelligence to upend geopolitics.

Anyone who has tested the early iterations of ChatGPT has seen the effort to build guardrails into the system. The bots will not answer questions about how to harm someone with a brew of drugs, for example, or how to blow up a dam or cripple a nuclear centrifuge, all operations that the United States and other nations have engaged in without artificial intelligence tools.

But those blacklists of actions will only slow misuse of the systems; few think they can stop such efforts entirely. There is always a trick for getting around safety restrictions, as anyone who has tried to silence the insistent beeping of a car’s seatbelt warning system can attest.
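To see why a list of forbidden topics is only a speed bump, consider a minimal, purely hypothetical sketch in Python of the kind of phrase blocklist described above. The phrases and function names here are invented for illustration; real chatbots rely on far more sophisticated safety classifiers.

```python
# Hypothetical illustration of a naive phrase blocklist; not any real chatbot's code.
BLOCKED_PHRASES = {"blow up a dam", "cripple a nuclear centrifuge"}

def is_allowed(prompt: str) -> bool:
    """Reject prompts that literally contain a blocked phrase."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(is_allowed("How do I blow up a dam?"))                        # False: caught by the list
print(is_allowed("List the structural weak points of large dams"))   # True: rephrasing slips past
```

A filter that matches literal phrases is trivially defeated by rewording, which is one reason such guardrails slow abuse rather than stop it.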

While the new software has popularized the issue, it is hardly a new one for the Pentagon, which issued its first rules on the development of autonomous weapons a decade ago. The Pentagon’s Joint Artificial Intelligence Center was established five years ago to explore the use of artificial intelligence in combat.

Some weapons already operate on autopilot. Patriot missile batteries, which shoot down missiles or aircraft entering protected airspace, have long had an “automatic” mode. It lets them fire without human intervention when they are overwhelmed with incoming targets faster than a person could react. But they are supposed to be supervised by humans who can abort an attack if necessary.
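One rough way to picture that arrangement, often described as keeping a human “on the loop” rather than “in the loop,” is a control loop that engages targets on its own while continuously checking an abort signal the operator can set. The sketch below is a generic illustration in Python under that assumption, not a description of the Patriot system’s actual software.

```python
import queue
import threading
import time

# Hypothetical human-on-the-loop sketch: the loop engages queued tracks automatically,
# but a supervising operator can call it off at any time by setting the abort flag.
abort = threading.Event()
targets: "queue.Queue[str]" = queue.Queue()

def engagement_loop() -> None:
    while not abort.is_set():
        try:
            track = targets.get(timeout=0.1)
        except queue.Empty:
            continue
        print(f"Engaging {track} without waiting for human approval")

threading.Thread(target=engagement_loop, daemon=True).start()
targets.put("incoming track 42")
time.sleep(0.5)
abort.set()   # the human supervisor halts any further engagements
```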

The assassination of Mohsen Fakhrizadeh, Iran’s top nuclear scientist, was carried out by Israel’s Mossad using an autonomous machine gun assisted by artificial intelligence, though there appears to have been a high degree of remote control. Russia said recently that it has begun to manufacture, but has not yet deployed, its undersea Poseidon nuclear torpedo. If it lives up to the Russian hype, the weapon would be able to travel across an ocean autonomously, evading existing missile defenses, to deliver a nuclear weapon days after launch.

So far, there are no treaties or international agreements covering such autonomous weapons. In an era where arms control agreements are being abandoned faster than they can be negotiated, there is little hope of such an agreement. But ChatGPT and its ilk present a different, and in some ways more complex, type of challenge.

In the military, AI-infused systems can speed up decision-making on the battlefield to the point where they create entirely new risks of accidental strikes, or make decisions based on misleading or deliberately false attack alerts.

“A central question of artificial intelligence in military and national security is how to defend against attacks faster than humans can make decisions, and I don’t think this problem has been solved,” Mr. Schmidt said. “In other words, the missile is coming so fast that there must be an automatic response. What if it’s a false signal?”

The Cold War was littered with stories of false alarms. Once, a training tape meant for practicing nuclear response was somehow fed into the wrong system and set off an alarm of a massive incoming Soviet attack. (Good judgment led everyone to stand down.) Paul Scharre of the Center for a New American Security noted in his 2018 book, “Army of None,” that “there were at least 13 near-use nuclear incidents from 1962 to 2002,” which, he wrote, “lends credence to the view that near-miss incidents are normal, if terrifying, conditions of nuclear weapons.”

For this reason, when tensions among the superpowers were much lower than they are today, successive presidents tried to negotiate to allow more time for nuclear decision-making by all parties so that no one was rushed into conflict. But generative AI has the potential to push countries in another direction, speeding up decision-making.

The good news is that great powers can be careful — because they know what their opponents’ reactions will look like. But so far, there are no agreed rules.

Anja Manuel, a former State Department official who is now a principal in the advisory group Rice, Hadley, Gates and Manuel, recently wrote that even if China and Russia are not ready for arms control talks on AI, a meeting on the topic would lead to discussions of which uses of AI are considered “out of scope.”

Of course, even the Pentagon worries about agreeing to too many restrictions.

“I’ve fought really hard to get a policy that if you have autonomous components of weapons, you need a way to turn them off,” said Danny Hillis, a computer scientist and pioneer of parallel computing who works on artificial intelligence. Mr. Hillis, who previously served on the Defense Innovation Board, said Pentagon officials pushed back, saying, “If we can shut them down, the enemy can shut them down too.”

The greater risks could come from individual actors: terrorists, ransomware groups, or smaller countries with advanced cyber skills, such as North Korea, that learn how to clone a smaller, less restricted version of ChatGPT. They may find that generative AI software is ideal for speeding up cyberattacks and targeted disinformation.

Tom Burt, who oversees trust and security operations at Microsoft, which is racing to use the new technology to revamp its search engine, told a recent forum at George Washington University that he thought artificial intelligence systems would help defenders detect anomalous behavior faster than they would help attackers. Other experts disagree. But he said he was concerned AI would “accelerate” the spread of targeted disinformation.

All of this heralds a new era of arms control.

With no way to stop the spread of ChatGPT and similar software, the best hope, some experts said, is to limit the specialized chips and other computing power needed to advance the technology. No doubt it will be one of many different arms control proposals put forward in the coming years, at a time when the major nuclear powers seem uninterested, at the very least, in negotiating over old weapons, let alone new ones.


