September 30, 2023

A group of industry leaders warned Tuesday that the artificial intelligence technology they are developing could one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter was signed by more than 350 executives, researchers and engineers working in AI.

Signatories include executives from three leading AI companies: OpenAI CEO Sam Altman; Google DeepMind CEO Demis Hassabis; and Anthropic CEO Dario Amodei.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern artificial intelligence movement, signed the statement, as did other prominent researchers in the field. (Yann LeCun, the third Turing Award winner, who leads Meta’s AI research efforts, had not signed as of Tuesday.)

The statement comes amid growing concerns about the potential harms of artificial intelligence. Recent advances in so-called large language models — the type of AI system used by ChatGPT and other chatbots — have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Eventually, some believe, AI could become powerful enough to create societal-scale disruption within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

Many industry leaders share these concerns, putting them in the unusual position of arguing that the technology they are building — in many cases, in a frantic race to build it faster than their competitors — poses grave risks and should be regulated more tightly.

This month, Mr. Altman, Mr. Hassabis and Mr. Amodei met with President Biden and Vice President Kamala Harris to discuss AI regulation. In Senate testimony after the meeting, Mr. Altman warned that the risks of advanced artificial intelligence systems were serious enough to warrant government intervention and called for regulation to address the potential harms of AI.

Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represents a “coming out” for some industry leaders who have expressed concerns — but only in private — about the risks of the technology they are developing.

“There is a very common misconception, even in the AI community, that there are only a handful of doomers,” Mr. Hendrycks said. “But the truth is, many people privately express concerns about these things.”

Some skeptics argue that AI technology is still too immature to pose an existential threat. When it comes to today’s AI systems, they worry more about short-term problems, such as biased and incorrect responses, than about long-term dangers.

But others argue that AI is advancing so rapidly that it is already surpassing human-level performance in some domains and will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, raising fears that “artificial general intelligence,” or AGI — a form of AI that can match or exceed human-level performance across a wide variety of tasks — may not be far off.

In a blog post last week, Mr. Altman and two other OpenAI executives laid out several ways that powerful AI systems could be managed responsibly. They called for cooperation among the leading AI makers, more technical research into large language models and the creation of an international AI safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

Mr. Altman also expressed support for rules requiring makers of large, cutting-edge AI models to register for a government-issued license.

In March, more than 1,000 technologists and researchers signed another open letter calling for a six-month moratorium on the development of the largest AI models, citing concerns about “a runaway race to develop and deploy ever more powerful digital minds.”

Organized by the Future of Life Institute, another AI-focused nonprofit, that letter was signed by Elon Musk and other high-profile tech leaders, but relatively few leaders at the top AI labs signed it.

The brevity of the Center for AI Safety’s new statement — just 22 words in total — is intended to unite AI experts who may disagree about the nature of specific risks or the steps to prevent them but who share a general concern about powerful AI systems, Mr. Hendrycks said.

“We don’t want to push a huge menu of 30 potential interventions,” he said. “When that happens, it dilutes the message.”

The statement was initially shared with several prominent AI experts, including Mr. Hinton, who said he quit his job at Google this month so he could speak more freely about the potential harms of AI. From there, it made its way to several of the major AI labs, where some employees then signed on.

As millions of people turn to AI chatbots for entertainment, companionship and productivity, and the underlying technology improves rapidly, the urgency of the warnings from AI leaders has grown.

“I think if this technology goes wrong, it can go quite wrong,” Mr. Altman told a Senate subcommittee. “We want to work with the government to prevent that from happening.”


