
Geoffrey Hinton is an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the artificial intelligence systems that the tech industry’s biggest companies see as key to their future.
On Monday, however, he formally joined a growing chorus of critics who say those companies are racing toward danger as they aggressively develop products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has resigned from his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so that he can speak freely about the risks of AI. A part of him, he said, now regrets his life’s work.
“I console myself with the usual excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said in a lengthy interview last week in the dining room of his Toronto home, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from AI pioneer to doomsayer marks a remarkable moment for the technology industry, perhaps its most significant inflection point in decades. Industry leaders believe the new artificial intelligence systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many in the industry is the fear that they are releasing something dangerous into the wild. Generative AI can already be a tool for misinformation. Soon, it could be a threat to jobs. Somewhere down the line, the tech world’s biggest worriers say, it could be a risk to humanity.
“It’s hard to see how to prevent bad guys from using it for bad things,” Dr. Hinton said.
After the San Francisco startup OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because AI technologies pose “profound risks to society and humanity.”
A few days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of AI. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
Dr. Hinton, often referred to as the “Godfather of AI,” did not sign either letter and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company of his resignation last month, and on Thursday he spoke by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
“We remain committed to a responsible approach to AI,” Jeff Dean, Google’s chief scientist, said in a statement. “We’re continually learning to understand emerging risks while also innovating boldly.”
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was shaped by his personal convictions about the development and use of AI. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network, a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but he left for Canada because he was unwilling to accept Pentagon funding. At the time, most AI research in the United States was funded by the Department of Defense. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield, what he calls “robot soldiers.”
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects such as flowers, dogs and cars.
Google spent $44 million to acquire the company started by Dr. Hinton and his two students. Their system led to increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton believed this was a powerful way for machines to understand and generate language, but inferior to the way humans handle language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways, but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
He believes that as companies improve their AI systems, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” of the technology, careful not to release anything that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot, challenging Google’s core business, Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with fake photos, videos and text, and that ordinary people will “not be able to know what is true anymore.”
He is also worried that AI technologies will, in time, upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology could pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually to run that code on their own. And he fears a day when truly autonomous weapons, those killer robots, become reality.
“The idea that this stuff could actually get smarter than people, a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say these threats are hypothetical. But Dr. Hinton believes that the race between Google, Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He doesn’t say that anymore.