
More than 1,000 tech leaders and researchers, including Elon Musk, are urging AI labs to pause development of the most advanced systems, warning in an open letter that AI tools “pose profound risks to society and humanity.”
The nonprofit Future of Life Institute released the letter on Wednesday.
Others who signed the letter include Apple co-founder Steve Wozniak; Andrew Yang, an entrepreneur and 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.
“These things are shaping our world,” Gary Marcus, an entrepreneur and academic who has long complained about the shortcomings of AI systems, said in an interview. “We face a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a lot of unknowns.”
AI powers chatbots like ChatGPT, Microsoft’s Bing, and Google’s Bard, which can hold human-like conversations, write articles on endless topics, and perform more complex tasks like writing computer code.
The push to develop more powerful chatbots has sparked a race that could determine the tech industry’s next leader. But the tools have been criticized for getting details wrong and for their ability to spread misinformation.
The open letter called for a moratorium on the development of artificial intelligence systems more powerful than GPT-4, the chatbot launched this month by OpenAI, a research lab co-founded by Mr. Musk. The letter said the pause would provide time for the introduction of “shared safety protocols” for AI systems. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added.
The letter said that the development of powerful artificial intelligence systems should be advanced “only when we are confident that their effects will be positive and their risks will be manageable.”
“Humanity can enjoy a flourishing future with AI,” the letter said. “Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”
OpenAI CEO Sam Altman did not sign the letter.
Mr. Marcus and others argue that persuading the broader tech community to agree to a moratorium will be difficult. But governments are also unlikely to act quickly, as lawmakers have done little to regulate AI.
U.S. politicians don’t know much about the technology, Rep. Jay Obernolte, a California Republican, recently told The New York Times. In 2021, EU policymakers proposed a law aimed at regulating artificial intelligence technologies that could cause harm, including facial recognition systems.
The measure, which could be passed as early as this year, would require companies to conduct risk assessments of artificial intelligence technologies to determine how their use could affect health, safety and individual rights.
GPT-4 is what AI researchers call a neural network, a mathematical system that learns skills by analyzing data. Neural networks are the same technology that digital assistants like Siri and Alexa use to recognize voice commands, and that self-driving cars use to recognize pedestrians.
Around 2018, companies like Google and OpenAI began building neural networks that learn from vast amounts of digital text, including books, Wikipedia articles, chat logs and other information culled from the internet. These networks are called large language models, or LLMs.
By finding billions of patterns in all that text, LLMs learn to generate text on their own, including tweets, term papers and computer programs. They can even carry on a conversation. Over the years, OpenAI and others have built LLMs that learn from increasing amounts of data.
This improves their capabilities, but the systems still make mistakes. They often get facts wrong and will fabricate information without warning, a phenomenon researchers have dubbed “hallucination.” Because the systems deliver all information with what seems like complete confidence, it is often difficult for people to tell what is right and what is wrong.
Experts worry that these systems could be abused to spread disinformation faster and more efficiently than was possible in the past. These tools, they argue, could even be used to manipulate people’s behavior online.
Before GPT-4 was released, OpenAI asked outside researchers to test the system for dangerous uses. The researchers showed that it could be coaxed into suggesting how to buy illegal guns online, describing how to make dangerous substances from household items and writing Facebook posts to convince women that abortion is unsafe.
They also found that the system was able to use TaskRabbit to hire a person over the internet to defeat Captcha tests, which are widely used to identify bots online. When the person asked whether it was “a robot,” the system said it was a visually impaired person.
After changes by OpenAI, GPT-4 no longer does these things.
For years, many AI researchers, academics and technology executives, including Mr. Musk, have worried about the potential for greater harm from AI systems. Some belong to the vast online communities known as rationalists or effective altruists, who believe that artificial intelligence could eventually destroy humanity.
The letter was led by the Future of Life Institute, a group that studies existential risks to humanity and has long warned of the dangers of artificial intelligence. But it was signed by a wide variety of people from industry and academia.
While some of those who signed the letter are known for repeatedly expressing concerns that artificial intelligence could destroy humanity, others, including Mr. Marcus, are more concerned about its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice.
The letter “shows how many people are deeply concerned about what is happening,” said Mr. Marcus, who signed it. He believes the letter will mark a turning point. “I see this as a very important moment in the history of artificial intelligence, and perhaps humanity,” he said.
However, he acknowledged that those who signed the letter may find it difficult to persuade the broader community of companies and researchers to pause. “The letter is not perfect,” he said. “But the spirit is perfectly right.”