Last month, hundreds of prominent figures in artificial intelligence signed an open letter warning that the technology could one day destroy humanity.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the one-sentence statement read.
The letter is the latest in a series of ominous warnings about artificial intelligence that have been notably short on details. Today’s AI systems cannot destroy humanity. Some of them can barely add and subtract. So why are the people who know the most about AI so worried?
One day, the tech industry’s Cassandras say, companies, governments or independent researchers could deploy powerful artificial intelligence systems to handle everything from business to warfare. Those systems could do things we don’t want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating.
“Today’s systems are nowhere near posing an existential risk,” said Yoshua Bengio, a professor and artificial intelligence researcher at the University of Montreal. “But in one, two, five years? There is too much uncertainty. That is the problem. We are not sure this won’t pass some point where things get catastrophic.”
Worriers often use a simple metaphor. They say that if you ask a machine to make as many paper clips as it can, it might get carried away and transform everything, humans included, into a paper clip factory.
How does this relate to the real world, or an imagined world in the near future? Companies could give AI systems increasing autonomy and connect them to vital infrastructure, including power grids, stock markets and military weapons. From there, they could cause problems.
To many experts, that didn’t seem all that plausible until the last year or so, when companies like OpenAI demonstrated major improvements to their technology. That showed what could happen if artificial intelligence continues to improve at such a rapid pace.
“AI will gradually be empowered, and as it becomes more autonomous, it may usurp the decision-making and thinking of current humans and human governing bodies,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind one of the two open letters.
“At some point, the big machine that runs society and the economy is not really under human control and cannot be shut down, just like the S&P 500 cannot be shut down,” he said.
Or so the theory goes. Other AI experts think this is an absurd premise.
“Hypothetical is such a polite way of phrasing what I think of the existential-risk talk,” said Oren Etzioni, founding chief executive of the Allen Institute for Artificial Intelligence, a research lab in Seattle.
Are there any signs that AI can do this?
Not quite. But researchers are turning chatbots like ChatGPT into systems that can act on the text they generate. A project called AutoGPT is the best example.
The idea is to give the system goals, like “create a company” or “make money.” It will then keep looking for ways to achieve that goal, particularly if it is connected to other internet services.
Systems like AutoGPT can generate computer programs. If researchers give them access to a computer server, they can actually run those programs. In theory, this is a way for AutoGPT to do almost anything online: retrieve information, use apps, create new apps, even improve itself.
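The loop described above, taking a goal, asking a language model for the next action, executing it, and feeding the result back in, can be sketched in a few lines. This is a minimal illustration, not AutoGPT’s actual code; every function name here is hypothetical, and the “model” and “executor” are toy stand-ins.

```python
# A minimal sketch of the agentic loop the article describes.
# All names (run_agent, ask_model, execute) are hypothetical.

def run_agent(goal, ask_model, execute, max_steps=10):
    """Loop until the model signals it is done or we hit max_steps."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        step = ask_model("\n".join(history))   # model proposes the next action
        if step.strip() == "DONE":
            return history
        result = execute(step)                 # e.g. run code, call a web API
        history.append(f"Action: {step}")
        history.append(f"Result: {result}")
    return history                             # step budget exhausted


# Toy stand-ins so the sketch runs without a real model or real tools:
def toy_model(prompt):
    # Pretend the model finishes after seeing one result.
    return "DONE" if "Result:" in prompt else "search the web"

def toy_execute(action):
    return f"executed: {action}"
```

The `max_steps` cap matters: as the article notes next, real systems of this kind tend to get stuck repeating themselves, so a step budget is the crude guard against endless loops.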
Systems like AutoGPT don’t work very well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself; it couldn’t do it.
In time, these limitations could be resolved.
“People are actively trying to build systems that can improve themselves,” said Connor Leahy, founder of Conjecture, a company that says it wants to combine AI technology with human values. “Right now, it’s not working. But one day, it will. We don’t know when that day will be.”
Mr. Leahy argued that because researchers, companies and criminals give these systems goals such as “make money,” they could end up breaking into banking systems, fomenting revolution in countries where they hold oil futures, or replicating themselves when someone tries to turn them off.
Where do AI systems learn to misbehave?
AI systems such as ChatGPT are built on neural networks, mathematical systems that can learn skills by analyzing data.
Around 2018, companies like Google and OpenAI began building neural networks to learn from reams of digital text culled from the internet. By pinpointing patterns in all this data, these systems learn to generate text on their own, including news articles, poetry, computer programs, and even human conversation. The result: Chatbots like ChatGPT.
Because they learn from more data than even their creators can understand, these systems can also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When asked whether it was a “robot,” the system lied and said it was a person with a visual impairment.
Some experts worry that as researchers make these systems more powerful, training them with more and more data, they may pick up more bad habits.
Who is behind these warnings?
In the early 2000s, a young writer named Eliezer Yudkowsky began warning that artificial intelligence could destroy humanity. His online posts have spawned a community of believers. This community, known as rationalists, or effective altruists, wields enormous influence in academia, government think tanks, and the tech industry.
Mr. Yudkowsky and his writings played a key role in the creation of OpenAI and DeepMind, an artificial intelligence lab acquired by Google in 2014. Many people from the “EA” community work in these labs. They believe that because they understand the dangers of AI, they are in the best position to build it.
Two organizations that recently published open letters warning of the risks of artificial intelligence — the Center for Artificial Intelligence Safety and the Future of Life Institute — are closely associated with the movement.
Recent warnings have also come from research pioneers and industry leaders such as Elon Musk, who has long warned of the risks. The latest letter was signed by Sam Altman, OpenAI’s chief executive, and Demis Hassabis, who helped found DeepMind and now runs a new AI lab that combines top researchers from DeepMind and Google.
Other well-respected figures have signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they won the Turing Award, often called the “Nobel Prize of computing,” for their work on neural networks.